
ABSTRACT
A typical human observes, hears, and responds to their environment, but some
people lack these abilities. Such people, who are primarily deaf and mute, rely
on sign language for communication. However, since few hearing people
understand sign language, it is extremely difficult for them to communicate,
especially when they want to participate in social, educational, and
job-related activities. The goal of the work proposed in this thesis is to
create a sign language translation system that helps people who are deaf and
mute communicate with hearing people. Sign language recognition systems rely
primarily on two approaches: vision-based and sensor-based.
For the sensor-based approach, the designed system recognizes American Sign
Language letters and is based on a glove equipped with five slide
potentiometers and two force-sensitive resistors. The slide potentiometers
detect the bending of the fingers, and the force-sensitive resistors detect
contact between adjacent fingers. The glove prototype is used to create three
data sets with different numbers of samples. Two algorithms, Support Vector
Machine (SVM) and Deep Neural Network, are trained and tested on these data
sets. The results show that the best test accuracy for the designed system,
98%, is obtained with the SVM algorithm after optimizing its hyper-parameters.
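
As a minimal sketch of how such a training pipeline might look, assuming each
glove sample is a seven-value feature vector (five potentiometer readings and
two force-sensitive-resistor readings) classified with scikit-learn's SVC; the
file name, feature layout, and hyper-parameter grid are illustrative
assumptions, not details taken from the thesis:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data file: each row holds 7 glove readings
# (5 slide potentiometers + 2 force-sensitive resistors), and each
# label is one of the ASL letters.
data = np.load("glove_samples.npz")
X, y = data["features"], data["labels"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Scale the sensor readings, then fit an RBF-kernel SVM; the grid
# below is an illustrative choice of hyper-parameters to optimize.
pipeline = make_pipeline(StandardScaler(), SVC())
param_grid = {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.1, 0.01]}
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)

print("best hyper-parameters:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```

Grid search with cross-validation is one common way to optimize the C and
gamma hyper-parameters of an RBF-kernel SVM.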
For the vision-based approach, the device used is the Leap Motion Controller,
which provides infrared imagery of the scene and skeletal data points for the
hands and fingers. The infrared imagery is used to recognize British Sign
Language letters. A Convolutional Neural Network is trained and tested on the
created data set, and the accuracy obtained is 99%.
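
The network architecture is not specified here; the following is a hedged
sketch of a small CNN classifier for such infrared frames, assuming 64x64
grayscale inputs and 26 letter classes (both assumptions), built with Keras:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed preprocessing: Leap Motion infrared frames resized to
# 64x64 grayscale, one class per BSL letter. These values are
# illustrative and may differ from the thesis's actual setup.
NUM_CLASSES = 26

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),   # low-level edge features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # higher-level hand shapes
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would then run on the created infrared data set, e.g.:
# model.fit(train_images, train_labels, epochs=10,
#           validation_data=(test_images, test_labels))
```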