
The doctoral thesis of researcher Khaled Ali Abbas, from the Department of Electrical Engineering, College of Engineering, University of Basra, titled "Machine Learning Based Upper Limbs Gestures Identification," was discussed. The abstract reads:
Hand Gesture Recognition (HGR) is a fast-growing field with wide-ranging applications in human-computer interaction, robotics, assistive technology, and medical sciences. This study presents a machine-learning-based algorithm for HGR using features extracted from mechanomyography (MMG), accelerometer, and gyroscope signals, organized into three methodological phases.
Initially, a pre-collected dataset (MMG-DATA) comprising eight distinct
hand gestures recorded from 35 participants using a triaxial accelerometer
was utilized. A comprehensive set of descriptive statistical features was
extracted and used to train five supervised learning classifiers: decision tree
(DT), linear discriminant analysis (LDA), naïve Bayes (NB), support vector
machine (SVM), and k-nearest neighbor (k-NN). Performance evaluation
was conducted under two training/testing splits: 80%/20% and an equal
50%/50% division. Among the classifiers, the SVM
consistently achieved the highest accuracy, confirming the stability and
reliability of the proposed approach.
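The first phase can be sketched as follows. This is a minimal illustration, not the thesis pipeline itself: the window length, the particular descriptive statistics, and the synthetic data are all assumptions standing in for the MMG-DATA recordings and the unspecified feature set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical stand-in for MMG-DATA: fixed-length windows of triaxial
# accelerometer samples, one gesture label (of eight) per window.
rng = np.random.default_rng(0)
n_windows, win_len, n_axes = 200, 128, 3
windows = rng.normal(size=(n_windows, win_len, n_axes))
labels = rng.integers(0, 8, size=n_windows)

def descriptive_features(w):
    """Per-axis descriptive statistics: mean, std, min, max, RMS."""
    return np.concatenate([
        w.mean(axis=0), w.std(axis=0),
        w.min(axis=0), w.max(axis=0),
        np.sqrt((w ** 2).mean(axis=0)),
    ])

# Build the feature matrix: 5 statistics x 3 axes = 15 features per window.
X = np.array([descriptive_features(w) for w in windows])

# 80%/20% training/testing split, as in the first evaluation scenario;
# the 50%/50% scenario would use test_size=0.5.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

The same feature matrix would be passed unchanged to the other four classifiers (DT, LDA, NB, k-NN) for comparison.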
The second phase involved the development of a novel dataset,
termed HGAG-DATA, collected using a custom-designed wearable
prototype that integrates both accelerometer and gyroscope sensors. This
dataset comprises recordings of 11 distinct hand gestures performed by 43
participants from diverse demographic backgrounds, resulting in a total of
23,650 six-dimensional (6D) data samples. HGAG-DATA, which will be
made publicly available via Mendeley Data, is intended to support the
training, validation, and generalization of HGR models. Its broad applicability spans multiple domains, including gaming,
healthcare, and assistive technologies.
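The stated totals are internally consistent: 43 participants times 11 gestures times 50 repetitions gives 23,650 samples (the per-gesture repetition count of 50 is inferred from those totals, and the array layout below is an illustrative assumption, not the published HGAG-DATA format).

```python
import numpy as np

# Inferred structure of HGAG-DATA: 43 participants x 11 gestures x 50
# repetitions, each sample six-dimensional (3 accelerometer + 3 gyroscope axes).
participants, gestures, reps = 43, 11, 50
data = np.zeros((participants * gestures * reps, 6))
print(data.shape)  # (23650, 6)
```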
In the final stage, the performance of four classifiers, DT, k-NN,
SVM, and ensemble bagged trees (EBT), was evaluated under varying data
structures, training/testing configurations, and feature vector dimensions.
Among these, the EBT classifier achieved the highest accuracy at 99.23%,
followed closely by SVM and k-NN. The classifiers exhibited robust and
consistent performance even when exposed to novel data patterns and
reduced feature sets, highlighting the generalizability and effectiveness of
the proposed approach for real-world hand gesture recognition applications.
This study demonstrates the value of fusing mechanomyography
(MMG) and inertial data for accurate gesture recognition, underscores the
importance of well-curated datasets, and highlights the potential of the
proposed models to be effectively deployed in real-world applications.