Name | Description | Size | Format
---|---|---|---
 | | 2.04 MB | Adobe PDF
Advisor(s)
Abstract(s)
In Portugal, more than 80,000 people with hearing impairment need to communicate through sign language. Equal opportunities and social inclusion are major concerns of today's society. The aim of this research is to create and evaluate a Deep Learning model that, using a dataset of images of characters in Portuguese sign language, can identify and recognize a user's gesture. For model training, 5,826 representative samples of the characters 'C', 'I', 'L', 'U' and 'Y' in Portuguese sign language were used. The Deep Learning model is based on a convolutional neural network. Evaluated on this sample, the model achieved an accuracy of 98.5%, which is considered a satisfactory result. However, two gaps remain: the lack of datasets covering the entire Portuguese sign language alphabet, and the lack of datasets capturing the various representations of movement that each word has in addition to the layout of letters. Using the proposed model with more complete datasets would make it possible to develop more inclusive user interfaces and provide equal opportunities for users with hearing difficulties.
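The thesis does not publish the exact network configuration, so the following is only a minimal sketch of what a convolutional classifier for the five characters ('C', 'I', 'L', 'U', 'Y') could look like; the framework (Keras), the assumed 64×64 grayscale input, and all layer sizes and hyperparameters are illustrative assumptions, not the author's actual model.

```python
# Illustrative CNN sketch for classifying five Portuguese sign language characters.
# Architecture, input size, and hyperparameters are assumptions for demonstration only.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5          # 'C', 'I', 'L', 'U', 'Y'
IMG_SHAPE = (64, 64, 1)  # assumed grayscale input resolution

def build_model():
    model = models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would use the 5,826 labelled samples split into training and
# evaluation sets, e.g.:
# model = build_model()
# model.fit(x_train, y_train, epochs=20, validation_data=(x_val, y_val))
```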
Description
Keywords
Deep learning; Inclusion; User interfaces; Portuguese sign language