Spectral Energy Mapping for EMG-based Recognition of Silent Speech
by Matthias Janke, Michael Wand, Tanja Schultz
Abstract:
This paper reports on our latest study on speech recognition based on surface electromyography (EMG). This technology allows for Silent Speech Interfaces since EMG captures the electrical potentials of the human articulatory muscles rather than the acoustic speech signal. Therefore, our technology enables speech recognition to be applied to silently mouthed speech. Earlier experiments indicate that the EMG signal is greatly impacted by the mode of speaking. In this study we analyze and compare EMG signals from audible, whispered, and silent speech. We quantify the differences and develop a spectral mapping method to compensate for these differences. Finally, we apply the spectral mapping to the front-end of our speech recognition system and show that recognition rates on silent speech improve by up to 12.3% relative.
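The abstract does not detail how the spectral mapping is computed; as an illustration only, one simple way to map spectral energies between speaking modes is to estimate a per-frequency-bin scaling factor that shifts the average silent-speech EMG spectrum toward the audible-speech average. The sketch below assumes spectra are stored as NumPy arrays of shape (frames, bins); all function names are hypothetical and not taken from the paper.

```python
import numpy as np

def estimate_spectral_map(audible_spectra, silent_spectra):
    """Per-bin multiplicative factors that scale the average silent-speech
    spectral energy toward the audible-speech average.

    Both inputs: arrays of shape (n_frames, n_bins) with non-negative
    spectral energies. Returns an array of shape (n_bins,).
    """
    eps = 1e-10  # guard against division by zero in empty bins
    audible_mean = audible_spectra.mean(axis=0)
    silent_mean = silent_spectra.mean(axis=0)
    return (audible_mean + eps) / (silent_mean + eps)

def apply_spectral_map(spectra, mapping):
    """Apply the per-bin mapping to silent-speech spectra before they
    enter the recognizer front-end."""
    return spectra * mapping
```

After mapping, the mean spectrum of the silent-speech frames matches the audible-speech mean bin by bin, which is the compensation idea the abstract describes; the actual method in the paper may differ.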
Reference:
Spectral Energy Mapping for EMG-based Recognition of Silent Speech (Matthias Janke, Michael Wand, Tanja Schultz), In First International Workshop on Bio-inspired Human-Machine Interfaces and Healthcare Applications, 2010. (Side event of Biosignals 2010 conference)
Bibtex Entry:
@inproceedings{janke2010spectral,
  author={Janke, Matthias and Wand, Michael and Schultz, Tanja},
  title={Spectral Energy Mapping for EMG-based Recognition of Silent Speech},
  booktitle={First International Workshop on Bio-inspired Human-Machine Interfaces and Healthcare Applications},
  year={2010},
  note={Side event of Biosignals 2010 conference},
  url={https://www.csl.uni-bremen.de/cms/images/documents/publications/WandSchultz_BInterface2010.pdf},
  abstract={This paper reports on our latest study on speech recognition based on surface electromyography (EMG). This technology allows for Silent Speech Interfaces since EMG captures the electrical potentials of the human articulatory muscles rather than the acoustic speech signal. Therefore, our technology enables speech recognition to be applied to silently mouthed speech. Earlier experiments indicate that the EMG signal is greatly impacted by the mode of speaking. In this study we analyze and compare EMG signals from audible, whispered, and silent speech. We quantify the differences and develop a spectral mapping method to compensate for these differences. Finally, we apply the spectral mapping to the front-end of our speech recognition system and show that recognition rates on silent speech improve by up to 12.3% relative.}
}