Search Results
Now showing 1 - 6 of 6
- Sibilant consonants classification with deep neural networks
  Publication. Anjos, Ivo; Marques, Nuno; Grilo, Ana Margarida; Guimarães, Isabel; Magalhães, João; Cavaco, Sofia
  Abstract: Many children suffering from speech sound disorders cannot pronounce the sibilant consonants correctly. We have developed a serious game that is controlled by the children's voices in real time and that allows children to practice the European Portuguese sibilant consonants. For this, the game uses a sibilant consonant classifier. Since the game does not require any type of adult supervision, children can practice the production of these sounds more often, which may lead to faster improvements in their speech. Recently, the use of deep neural networks has brought considerable improvements in classification for a variety of use cases, from image classification to speech and language processing. Here we propose to use deep convolutional neural networks to classify sibilant phonemes of European Portuguese in our serious game for speech and language therapy. We compared the performance of several different artificial neural networks that used Mel frequency cepstral coefficients or log Mel filterbanks. Our best deep learning model achieves a classification score of 95.48% using a 2D convolutional model with log Mel filterbanks as input features.
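The log Mel filterbank features this abstract feeds to the 2D CNN can be sketched in plain NumPy. This is a minimal illustration of the standard feature, not the authors' pipeline; the sample rate, frame length, and filter count are assumptions:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular Mel filters mapping an FFT power spectrum to Mel bands."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_features(frame, n_filters=40, sr=16000):
    """Log Mel filterbank energies for one windowed audio frame."""
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame)) ** 2
    fb = mel_filterbank(n_filters, n_fft, sr)
    return np.log(fb @ power + 1e-10)

# Example: a 32 ms frame of synthetic audio at 16 kHz (512 samples)
frame = np.sin(2 * np.pi * 3000 * np.arange(512) / 16000)
feats = log_mel_features(frame)
```

Stacking these per-frame vectors over time yields the 2D time-frequency input that a convolutional model consumes.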
- 3D facial video retrieval and management for decision support in speech and language therapy
  Publication. Carrapiço, Ricardo; Guimarães, Isabel; Grilo, Ana Margarida; Cavaco, Sofia; Magalhães, João
  Abstract: 3D video is introducing great changes in many health-related areas. The realism of such information provides health professionals with strong evidence analysis tools that facilitate clinical decision processes. Speech and language therapy aims to help subjects correct several disorders. The assessment of the patient by the speech and language therapist (SLT) requires several visual and audio analysis procedures that can interfere with the patient's production of speech. In this context, the main contribution of this paper is a 3D video system to improve health information management processes in speech and language therapy. The 3D video retrieval and management system supports multimodal health records and provides SLTs with tools to support their work in many ways: (i) it allows SLTs to easily maintain a database of patients' orofacial and speech exercises; (ii) it supports three-dimensional orofacial measurement and analysis in a non-intrusive way; and (iii) it allows searching for patient speech exercises by similar facial characteristics, using facial image analysis techniques. The second contribution is a dataset with 3D videos of patients performing orofacial speech exercises. The whole system was evaluated successfully in a user study involving 22 SLTs. The user study illustrated the importance of retrieval by similar orofacial speech exercise.
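The retrieval-by-similar-facial-characteristics idea in point (iii) can be sketched as nearest-neighbour search over facial descriptor vectors. The descriptors here are random stand-ins (the paper's actual facial image analysis features are not specified in the abstract):

```python
import numpy as np

def retrieve_similar(query, gallery, top_k=3):
    """Rank gallery items by cosine similarity to a query feature vector."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    order = np.argsort(-sims)[:top_k]        # indices of best matches first
    return order, sims[order]

rng = np.random.default_rng(1)
gallery = rng.normal(size=(100, 128))        # one descriptor per recorded exercise
query = gallery[42] + rng.normal(scale=0.01, size=128)  # near-duplicate of item 42
idx, scores = retrieve_similar(query, gallery)
```

An SLT-facing system would then surface the videos behind the top-ranked indices for side-by-side review.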
- A serious mobile game with visual feedback for training sibilant consonants
  Publication. Anjos, Ivo; Grilo, Ana Margarida; Ascensão, Mariana; Guimarães, Isabel; Magalhães, João; Cavaco, Sofia
  Abstract: The distortion of sibilant sounds is a common type of speech sound disorder (SSD) in Portuguese-speaking children. Speech and language pathologists (SLPs) frequently use the isolated sibilants exercise to assess and treat this type of speech error. While technological solutions like serious games can help SLPs motivate children to do the exercises repeatedly, there is a lack of such games for this specific exercise. Another important aspect is that, given the usually small number of therapy sessions per week, children are not improving at their maximum rate, which is only achieved with more intensive therapy. We propose a serious game for mobile platforms that allows children to practice their isolated sibilants exercises at home to correct sibilant distortions. This will allow children to practice their exercises more frequently, which can lead to faster improvements. The game, which uses an automatic speech recognition (ASR) system to classify the child's sibilant productions, is controlled by the child's voice in real time and gives immediate visual feedback to the child about her sibilant productions. In order to keep the computation on the mobile platform as simple as possible, the game has a client-server architecture in which an external server runs the ASR system. We trained it using raw Mel frequency cepstral coefficients, and we achieved very good results, with a test accuracy score above 91% using support vector machines.
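The MFCC-plus-SVM classifier described above can be sketched with scikit-learn. The 13-dimensional vectors below are synthetic stand-ins for real MFCC frames, and the class labels are illustrative only:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic 13-dimensional "MFCC" vectors for two sibilant classes
X = np.vstack([rng.normal(0.0, 1.0, (50, 13)),
               rng.normal(2.0, 1.0, (50, 13))])
y = np.array([0] * 50 + [1] * 50)            # e.g. 0 = /s/, 1 = /S/ (assumed labels)

clf = SVC(kernel="rbf").fit(X, y)            # RBF-kernel support vector machine
acc = clf.score(X, y)                        # accuracy on the fitted data
```

In the client-server setup the abstract describes, a model like `clf` would live on the server; the mobile client only records audio, sends features, and renders the visual feedback.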
- Inter- and intra-rater reliability in measuring the oral diadochokinetic rate in children (Fidedignidade inter e intra-juízes na medição da taxa diadococinética oral em crianças)
  Publication. Macedo, Filipa; Grilo, Ana Margarida
  Abstract: Objective: The objective of this study is to verify the intra- and inter-rater reliability of the assessment of the oral diadochokinetic rate at two assessment moments (two weeks apart). Methods: Five speech and language therapists evaluated audio recordings (using the Audacity™ program and SENNHEISER HD201 headphones) of five diadochokinetic tasks (three monosyllabic cycles, one disyllabic cycle and one trisyllabic cycle) from thirty-two children, at a first and a second moment. The inter-rater reliability results were obtained with Cronbach's alpha, and the intraclass correlation coefficient was used to obtain the intra-rater reliability results. Results: Although the results for the "duration" variable were not all optimal (α between 0.54 and 0.98), for the "number of syllables" (α between 0.96 and 1) and "diadochokinetic rate" (α between 0.94 and 0.99) variables there is inter- and intra-rater agreement of excellent quality. The intraclass correlation coefficient mostly yielded excellent reliability for all variables, with some results of satisfactory reliability. Conclusion: The results obtained diverge at the level of inter-rater assessment for the "duration" variable, but it was observed that the "diadochokinetic rate" and the "number of cycles", following the rules standardized in the present study, showed excellent reliability.
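The inter-rater statistic this study reports, Cronbach's alpha, is straightforward to compute from a subjects-by-raters score matrix. A minimal sketch with made-up scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a subjects x raters score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                            # number of raters
    item_var = scores.var(axis=0, ddof=1).sum()    # sum of per-rater variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of subject totals
    return k / (k - 1) * (1.0 - item_var / total_var)

# Five raters in perfect agreement on four children -> alpha = 1
perfect = np.tile(np.array([[1.0], [2.0], [3.0], [4.0]]), (1, 5))
alpha = cronbach_alpha(perfect)
```

Values near 1 indicate the high agreement the study reports for "number of syllables" and "diadochokinetic rate"; the wider 0.54-0.98 range for "duration" would correspond to raters disagreeing on where productions start and end.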
- A model for sibilant distortion detection in children
  Publication. Anjos, Ivo; Grilo, Ana Margarida; Ascensão, Mariana; Guimarães, Isabel; Magalhães, João; Cavaco, Sofia
  Abstract: The distortion of sibilant sounds is a common type of speech sound disorder in European Portuguese-speaking children. Speech and language pathologists (SLPs) use different types of speech production tasks to assess these distortions. One of these tasks consists of the sustained production of isolated sibilants. Using these sound productions, SLPs usually rely on auditory perceptual evaluation to assess the sibilant distortions. Here we propose to use an isolated sibilant machine learning model to help SLPs assess these distortions. Our model uses Mel frequency cepstral coefficients of the isolated sibilant phones and was trained with data from 145 children. The analysis of the false negatives detected by the model can give insight into whether a child has a sibilant production distortion. We were able to confirm that there is some relation between the model's classification results and the distortion assessments of professional SLPs: approximately 66% of the distortion cases identified by the model are confirmed by an SLP as having some sort of distortion or are perceived as the production of a different sound.
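The false-negative analysis described above amounts to flagging productions where the classifier does not recognize the sibilant the child intended. A minimal sketch of that screening step (the labels below are illustrative, not the study's data):

```python
import numpy as np

def flag_possible_distortions(intended, predicted):
    """Indices where the intended sibilant was not recognized by the model.

    These mismatches are the candidate distortions to forward to an SLP
    for perceptual confirmation.
    """
    intended = np.asarray(intended)
    predicted = np.asarray(predicted)
    return np.where(intended != predicted)[0]

intended  = np.array(["s", "s", "z", "sh", "zh", "s"])
predicted = np.array(["s", "sh", "z", "sh", "z", "s"])
candidates = flag_possible_distortions(intended, predicted)
```

Per the abstract, roughly two thirds of such flagged cases were confirmed by an SLP as distorted or perceived as a different sound, so the flag is a screening aid rather than a diagnosis.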
- Speech sounds data for typically developing European Portuguese children 6-9 years old
  Publication. Guimarães, Isabel; Ascensão, Mariana; Grilo, Ana Margarida
  Abstract: Purpose: To identify European Portuguese (EP) speech sound competence in children. Methods: A total of 240 children between 6 and 9;11 years old named 37 pictures. Gender and age effects, as well as the age limit for EP speech sound mastery, were analyzed. The percentage of consonants correct (PCC) was determined. The criteria used were PCC ≥75% (acquired sound) and PCC ≥90% (mastered sound). Results: No gender effect on speech sound development was found in the studied age range. Older children [8-9;11] showed a slightly better mean performance than younger children [6-7;11]. The girls appeared to reach higher mean competence than boys; however, the gender effect did not reach significance. In the [6-6;11] age range, all plosives (except word-medial /t/ and /g/), four fricatives (/f/, /v/, word-initial /ʃ/ and word-medial /Ʒ/) and two laterals (word-medial /r/ and word-initial and word-medial /R/) are mastered. The other targeted sounds are mastered either in the [7-7;11] or the [8-8;11] age range. Conclusion: The targeted EP speech sounds are mastered between 6 and 8;11 years old.
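The PCC metric and the two mastery criteria used above reduce to a simple computation. A sketch, with hypothetical counts (not the study's data):

```python
def pcc(correct, attempted):
    """Percentage of consonants correct."""
    return 100.0 * correct / attempted

def mastery_status(pcc_value):
    """Apply the study's criteria: >=90% mastered, >=75% acquired."""
    if pcc_value >= 90.0:
        return "mastered"
    if pcc_value >= 75.0:
        return "acquired"
    return "not acquired"

# A child producing 27 of 30 target consonants correctly scores PCC = 90%
status = mastery_status(pcc(27, 30))
```

Applying this per target sound and per age band yields the mastery-by-age table the abstract summarizes.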