Sign Language Recognition for Static and Dynamic Gestures

Authors

  • Jay Suthar

  • Devansh Parikh

  • Tanya Sharma

  • Avi Patel

Keywords

Indian sign language, skin segmentation, CNN (convolutional neural network), LSTM (long short-term memory)

Abstract

Humans are called social animals, which makes communication a vital part of human life. Humans use verbal and non-verbal forms of language to communicate, but not everyone is able to use oral speech, in particular people who are hearing impaired or mute. Sign language was consequently developed for them, yet communication barriers remain. Therefore, this paper proposes a system that uses a CNN for the classification of alphabets and numbers, which are static gestures in Indian sign language; a CNN is used because it gives very good results on image classification tasks, and the model is trained on hand-masked (skin-segmented) images. For dynamic hand gestures, the system uses an LSTM network for classification, since LSTMs are known for accurate prediction on temporally distributed (time-series) data. In short, this paper presents two models for recognizing different types of hand gestures: a CNN for static gestures and an LSTM for dynamic gestures.
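
The abstract mentions training on hand-masked (skin-segmented) images. Below is a minimal sketch of one common way to produce such masks, using an HSV threshold in OpenCV; the threshold values, kernel size, and function names here are illustrative assumptions, not the paper's actual preprocessing pipeline.

```python
# Minimal skin-segmentation sketch (assumed approach, not the paper's exact method).
import cv2
import numpy as np

def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Return the input frame with non-skin pixels zeroed out."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Hypothetical skin-tone range in HSV; real systems tune these
    # per lighting conditions and skin tones.
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([20, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening/closing to clean speckle noise in the mask.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
```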
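To make the two-model design concrete, here is a hedged Keras sketch of the model families the abstract names: a small CNN for static alphabet/number gestures and a stacked LSTM for dynamic gesture sequences. Layer sizes, input shapes, sequence length, feature dimension, and class counts are all assumptions for illustration, not the architecture reported in the paper.

```python
# Illustrative CNN (static gestures) and LSTM (dynamic gestures) sketches.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_static_cnn(num_classes: int = 35) -> tf.keras.Model:
    # Input: 64x64 single-channel skin-segmented hand image (assumed size).
    return models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

def build_dynamic_lstm(num_classes: int = 10,
                       seq_len: int = 30,
                       feat_dim: int = 128) -> tf.keras.Model:
    # Input: a sequence of per-frame feature vectors (assumed extraction step).
    return models.Sequential([
        layers.Input(shape=(seq_len, feat_dim)),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(64),
        layers.Dense(num_classes, activation="softmax"),
    ])
```

The split mirrors the abstract's reasoning: a single frame suffices for static alphabet and number signs, so a CNN over one image is enough, while dynamic signs unfold over time, so the LSTM consumes a whole frame sequence.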

How to Cite

Jay Suthar, Devansh Parikh, Tanya Sharma, & Avi Patel. (2021). Sign Language Recognition for Static and Dynamic Gestures. Global Journal of Computer Science and Technology, 21(D2), 1–3. Retrieved from https://computerresearch.org/index.php/computer/article/view/2058

Published

2021-05-15