Research Areas


Research in the Speech Disorders & Technology Lab (SDTL) is highly interdisciplinary, with a focus on motor speech disorders and assistive speech technologies. This work draws on several disciplines, including speech science, motor speech disorders, neuroscience/neuroimaging, computer science, biomedical engineering, and electrical and computer engineering. The lab's research topics include, but are not limited to, the following:

  • Assistive speech technologies, including silent speech interfaces for laryngectomees, dysarthric speech recognition and analysis, and speech synthesis

  • Neurogenic motor speech disorders (e.g., due to amyotrophic lateral sclerosis or ALS)

  • Neural speech decoding for brain-computer interfaces (BCIs)


Research Demos



Articulatory Distinctiveness Space of Normal, Whispered, and Silent Vowels

To better understand the general articulatory distinctiveness of vowels in different speech conditions (normal, whispered, and silent), a novel, robust approach called the articulatory vowel distinctiveness space (AVDS) was used to generate a space based on general kinematic pattern differences (rather than direct tongue and lip displacements/positions). See the AVDS below, where the silent vowel space is smaller than the whispered space, which in turn is smaller than the normal (voiced) space. This finding suggests that silent and whispered vowel production, which lack vocal fold vibration, are less distinct than voiced vowel production (Teplansky et al., ICPhS 2019).
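For readers curious how such a space can be computed, here is a minimal illustrative sketch (not the published AVDS pipeline): pairwise kinematic pattern differences between vowel tokens are measured with dynamic time warping and the resulting distance matrix is embedded into two dimensions with multidimensional scaling. The function names and the choice of distance are assumptions for illustration only.

```python
# Illustrative sketch only (not the published AVDS method): build a 2D vowel space
# from pairwise kinematic pattern differences rather than raw articulator positions.
import numpy as np
from sklearn.manifold import MDS

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two (time x channels) trajectories."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def vowel_space(trajectories):
    """trajectories: list of (time x channels) arrays, one per vowel token."""
    k = len(trajectories)
    dist = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            dist[i, j] = dist[j, i] = dtw_distance(trajectories[i], trajectories[j])
    # Embed the distance matrix into 2D so that inter-token distances are preserved.
    return MDS(n_components=2, dissimilarity="precomputed").fit_transform(dist)
```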


Integrating Articulatory Information in Deep-learning Text-to-Speech Synthesis

Articulatory information has been shown to be effective in improving the performance of hidden Markov model (HMM)-based text-to-speech (TTS) synthesis. Recently, deep learning-based TTS has been shown to outperform HMM-based approaches. This work investigated integrating articulatory information into deep learning-based TTS. The integration was achieved in two ways: (1) direct integration, and (2) direct integration plus a forward-mapping network, in which the output articulatory features were mapped to acoustic features by an additional DNN. Experimental results showed that adding articulatory information significantly improved performance (picture adapted from Cao et al., 2017).
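The sketch below illustrates the two integration strategies described above in a simplified form; the layer sizes, feature dimensions, and the averaging used to combine the two acoustic predictions are placeholders for illustration, not the configuration reported in Cao et al. (2017).

```python
# Minimal sketch (assumed shapes and layer sizes, not the published model) of the two
# integration strategies: (1) predict acoustic + articulatory features jointly, and
# (2) map the predicted articulatory features back to acoustic features with a
# separate forward-mapping DNN.
import torch
import torch.nn as nn

class JointAcousticModel(nn.Module):
    """Linguistic features -> acoustic + articulatory features (direct integration)."""
    def __init__(self, n_ling=300, n_acoustic=187, n_artic=18, hidden=512):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_ling, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.to_acoustic = nn.Linear(hidden, n_acoustic)
        self.to_artic = nn.Linear(hidden, n_artic)

    def forward(self, ling):
        h = self.trunk(ling)
        return self.to_acoustic(h), self.to_artic(h)

class ForwardMapping(nn.Module):
    """Articulatory features -> acoustic features (forward-mapping DNN)."""
    def __init__(self, n_artic=18, n_acoustic=187, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_artic, hidden), nn.ReLU(),
            nn.Linear(hidden, n_acoustic),
        )

    def forward(self, artic):
        return self.net(artic)

# Strategy (2): combine the direct acoustic prediction with the forward-mapped one
# (simple averaging here, as an assumption) before vocoding.
joint, fwd = JointAcousticModel(), ForwardMapping()
ling = torch.randn(8, 300)              # a batch of frame-level linguistic features
acoustic_direct, artic = joint(ling)
acoustic_final = 0.5 * (acoustic_direct + fwd(artic))
```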




DJ and his Friend: A Demo of Conversation Using the Real-Time Silent Speech Interface

This demo shows how the silent speech interface is used in daily conversation. DJ (the user) uses the silent speech interface to communicate with his friend (not shown on the screen). DJ is mouthing (i.e., without producing any voice) while the silent speech interface displays the text on the screen and produces synthesized speech (a female voice) (Wang et al., SLPAT 2014). (See Demo 2 with Leslie.)


Demo of Algorithm for Word Recognition from Continuous Articulatory Movements

In the demo below, the top panel plots the input (x, y, and z coordinates of sensors attached to the tongue and lips); the bottom panel shows the predicted sounds (timing in red) and actual sounds (timing in blue). This algorithm performs segmentation (detection of word onsets and offsets) and recognition simultaneously from the continuous tongue and lip movements (Wang et al., Interspeech 2012; SLPAT 2013).
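As a rough illustration of joint segmentation and recognition (not the published algorithm), the sketch below scores word templates against every candidate window of the continuous articulatory stream and keeps the best-scoring, non-overlapping hypotheses, yielding word labels together with their onsets and offsets. The same structure extends to the sentence-level demo below by swapping in sentence templates.

```python
# Illustrative sketch only (not the published algorithm): joint segmentation and
# recognition by template matching over a continuous articulatory stream.
import numpy as np

def window_score(segment, template):
    """Mean frame-wise distance after resampling the segment to the template length."""
    idx = np.linspace(0, len(segment) - 1, len(template)).astype(int)
    return float(np.mean(np.linalg.norm(segment[idx] - template, axis=1)))

def recognize(stream, templates, hop=5):
    """stream: (time x channels) array; templates: dict of word -> (time x channels)."""
    hits = []
    for word, tmpl in templates.items():
        length = len(tmpl)
        for onset in range(0, len(stream) - length, hop):
            score = window_score(stream[onset:onset + length], tmpl)
            hits.append((score, onset, onset + length, word))
    # Greedily keep the best-scoring, non-overlapping hypotheses (onset, offset, word).
    hits.sort()
    chosen, occupied = [], np.zeros(len(stream), dtype=bool)
    for score, on, off, word in hits:
        if not occupied[on:off].any():
            occupied[on:off] = True
            chosen.append((on, off, word))
    return sorted(chosen)
```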


Demo of Algorithm for Sentence Recognition from Continuous Articulatory Movements

In the demo below, the top panel plots the input (x, y, and z coordinates of sensors attached to the tongue and lips); the bottom panel shows the predicted sounds (timing in red) and actual sounds (timing in blue). This algorithm performs segmentation (detection of sentence onsets and offsets) and recognition simultaneously from the continuous tongue and lip movements (Wang et al., ICASSP 2012).


Quantitative Articulatory Vowel Space

The left part of the graphic shows the quantitative articulatory vowel space I derived from more than 1,500 vowel samples of tongue and lip movements collected from ten speakers; it resembles the long-standing descriptive articulatory vowel space (right part). I am now investigating the scientific and clinical applications of the quantitative articulatory vowel space (Wang et al., Interspeech 2011; JSLHR 2013).
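As a simplified, illustrative sketch (not the derivation used in the papers), one way to obtain a quantitative 2D vowel space is to average each token's sensor positions over time, project the token means with PCA, and then average within vowel categories; the helper below assumes that setup, and the resulting axis orientation is arbitrary.

```python
# Minimal sketch (assumed pipeline, not the published derivation) of a quantitative
# 2D vowel space from tongue and lip movement samples.
import numpy as np
from sklearn.decomposition import PCA

def quantitative_vowel_space(tokens, labels):
    """tokens: list of (time x channels) arrays; labels: vowel label per token."""
    means = np.stack([t.mean(axis=0) for t in tokens])      # one feature vector per token
    coords = PCA(n_components=2).fit_transform(means)       # token-level 2D coordinates
    # Average token coordinates within each vowel category to place the vowels.
    return {v: coords[np.array(labels) == v].mean(axis=0) for v in set(labels)}
```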


Articulatory Consonant Space

Using the same approach, articulatory consonant spaces were derived from about 2,100 consonant samples of tongue and lip movements collected from ten speakers. See the figure below (2D on the left and 3D on the right). Both consonant spaces are consistent with the descriptive articulatory features that distinguish consonants (particularly place of articulation). Another interesting finding is that a third dimension is not necessary for the articulatory vowel space but is very useful for the consonant space. I am now investigating the scientific and clinical applications of the articulatory consonant space as well (Wang et al., JSLHR 2013).


Speech Motor Control of Amyotrophic Lateral Sclerosis (ALS)

This is a collaborative project with MGH, the University of Toronto, and UT Dallas, in which SDTL focuses on the articulatory sub-system of the bulbar system in ALS (Green et al., ALSFD 2013; see a test protocol video on JOVE).