Publication

Empowering deaf-hearing communication: exploring synergies between predictive and generative AI-based strategies towards (Portuguese) Sign Language interpretation

dc.contributor.author: Adão, Telmo
dc.contributor.author: Oliveira, João
dc.contributor.author: Shahrabadi, Somayeh
dc.contributor.author: Jesus, Hugo
dc.contributor.author: Fernandes, Marco
dc.contributor.author: Costa, Ângelo
dc.contributor.author: Ferreira, Vânia
dc.contributor.author: Gonçalves, Martinho Fradeira
dc.contributor.author: Guevara Lopez, Miguel Angel
dc.contributor.author: Peres, Emanuel
dc.date.accessioned: 2023-11-03T16:10:23Z
dc.date.available: 2023-11-03T16:10:23Z
dc.date.issued: 2023
dc.description.abstract: Communication between Deaf and hearing individuals remains a persistent challenge requiring attention to foster inclusivity. Despite notable efforts in the development of digital solutions for sign language recognition (SLR), several issues persist, such as cross-platform interoperability and strategies for tokenizing signs to enable continuous conversations and coherent sentence construction. To address such issues, this paper proposes a non-invasive Portuguese Sign Language (Língua Gestual Portuguesa, or LGP) interpretation system-as-a-service, leveraging skeletal posture sequence inference powered by long short-term memory (LSTM) architectures. To address the scarcity of examples during machine learning (ML) model training, dataset augmentation strategies are explored. Additionally, a buffer-based interaction technique is introduced to facilitate the tokenization of LGP terms. This technique provides real-time feedback to users, allowing them to gauge the time remaining to complete a sign, which aids in the construction of grammatically coherent sentences based on inferred terms/words. To support human-like conditioning rules for interpretation, a large language model (LLM) service is integrated. Experiments reveal that LSTM-based neural networks, trained with 50 LGP terms and subjected to data augmentation, achieved accuracy levels ranging from 80% to 95.6%. Users unanimously reported a high level of intuition when using the buffer-based interaction strategy for term/word tokenization. Furthermore, tests with an LLM (specifically ChatGPT) demonstrated promising semantic correlation rates in generated sentences, comparable to expected sentences.
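The buffer-based tokenization idea summarized in the abstract can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the `FrameBuffer` class, the fixed-length window, and the `classify` callable are all hypothetical names introduced here, assuming a sign is segmented once a frame window fills.

```python
class FrameBuffer:
    """Accumulates per-frame skeletal keypoints until a sign-length window
    is full, then hands the window to a classifier and resets (a hedged
    sketch of buffer-based sign tokenization)."""

    def __init__(self, capacity, classify):
        self.capacity = capacity  # frames needed to complete one sign
        self.classify = classify  # callable: list of frames -> term/word
        self.frames = []

    def progress(self):
        """Fraction of the window filled -- the kind of real-time feedback
        that lets users gauge the time remaining to complete a sign."""
        return len(self.frames) / self.capacity

    def push(self, frame):
        """Add one frame; return an inferred term when the buffer fills,
        otherwise None."""
        self.frames.append(frame)
        if len(self.frames) < self.capacity:
            return None
        term = self.classify(self.frames)
        self.frames = []  # reset the buffer for the next sign
        return term
```

In the paper's pipeline the classifier role would be played by the LSTM model over skeletal posture sequences; here any callable stands in for it.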
dc.description.version: info:eu-repo/semantics/publishedVersion
dc.identifier.citation: Adão, T., Oliveira, J., Shahrabadi, S., Jesus, H., Fernandes, M., Costa, Â., Ferreira, V., et al. (2023). Empowering Deaf-Hearing Communication: Exploring Synergies between Predictive and Generative AI-Based Strategies towards (Portuguese) Sign Language Interpretation. Journal of Imaging, 9(11), 235. https://doi.org/10.3390/jimaging9110235
dc.identifier.doi: https://doi.org/10.3390/jimaging9110235
dc.identifier.issn: 2313-433X
dc.identifier.uri: http://hdl.handle.net/10400.26/47818
dc.language.iso: eng
dc.peerreviewed: yes
dc.relation.publisherversion: https://www.mdpi.com/2313-433X/9/11/235
dc.title: Empowering deaf-hearing communication: exploring synergies between predictive and generative AI-based strategies towards (Portuguese) Sign Language interpretation
dc.type: journal article
dspace.entity.type: Publication
person.familyName: GUEVARA LÓPEZ
person.givenName: MIGUEL ANGEL
person.identifier: A-3126-2011
person.identifier.ciencia-id: 8910-E298-D967
person.identifier.orcid: 0000-0001-7814-1653
person.identifier.scopus-author-id: 36999281000
rcaap.rights: openAccess
rcaap.type: article
relation.isAuthorOfPublication: 38c91a9b-1db6-4515-9462-b0a031edc325
relation.isAuthorOfPublication.latestForDiscovery: 38c91a9b-1db6-4515-9462-b0a031edc325

Files

Original bundle
Name: jimaging-09-00235-with-cover (1).pdf
Size: 3.52 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.85 KB
Format: Item-specific license agreed upon at submission