Speaking silently and still being understood? #Silent Speech Interfaces

At the sixth Main Event of the Bremen.AI Cluster, it was once again time to discuss fields of application for AI with many local and regional actors. Three speakers were invited to report on their projects. We would like to present one of them here.

Under the keyword “Human-Machine Interaction”, Professor Dr. Tanja Schultz presented the Cognitive Systems Lab (CSL) at the University of Bremen and parts of its work. The CSL works on human-centered technologies and applications based on biosignals, such as the acquisition, recognition, and interpretation of speech, muscle, and brain activity.

Speaking silently and still being understood by your conversation partner – is that possible?

Yes, this is exactly what the CSL has been researching for many years under the keyword “silent speech interfaces”. The aim was to find out how sounds or speech can be derived from the movements of muscle groups in the jaw without the vocal cords moving. The prototype, which has been under continuous development since 2004, masters this challenge. The underlying technology is based on detecting and recording the electrical potentials that arise from muscle activity – electromyography (EMG) for short. Using sensors and algorithms, silently spoken sentences are derived from the muscle movements and rendered in an artificial voice.
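To make the sensors-and-algorithms step more concrete, here is a minimal, purely illustrative sketch of such a pipeline: a window of EMG samples is reduced to a few energy features, which are then matched against per-word feature templates. This is not the CSL's actual method; all signals, templates, and function names below are hypothetical placeholders, and a real system would learn its models from recorded EMG data.

```python
import math

# Illustrative sketch (not the CSL system): recognize a silently
# articulated word from one surface-EMG window via per-band RMS
# energy features and nearest-centroid matching.

def rms_features(emg_window, n_bands=4):
    """Split one EMG window into bands and return per-band RMS energy."""
    size = len(emg_window) // n_bands
    feats = []
    for b in range(n_bands):
        band = emg_window[b * size:(b + 1) * size]
        feats.append(math.sqrt(sum(x * x for x in band) / len(band)))
    return feats

def nearest_word(feats, templates):
    """Return the template word whose feature vector is closest (squared L2)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda w: dist(feats, templates[w]))

# Hypothetical per-word templates (in practice learned from training data).
templates = {
    "hello": [0.9, 0.2, 0.1, 0.1],
    "yes":   [0.1, 0.8, 0.7, 0.1],
    "no":    [0.1, 0.1, 0.2, 0.9],
}

# Synthetic 100-sample EMG window whose band energies resemble "yes".
window = ([0.1] * 25
          + [0.8, -0.8] * 12 + [0.8]
          + [0.7, -0.7] * 12 + [0.7]
          + [0.1] * 25)

print(nearest_word(rms_features(window), templates))  # → yes
```

In a deployed silent speech interface, the recognized words would then be passed to a text-to-speech engine to produce the artificial voice mentioned above.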

In the future, this technology could be integrated into smartphones, so that your conversation partner would hear your silently articulated sentences rendered in an artificial voice. Patients who can no longer use their vocal cords after a laryngectomy could benefit from such technology. More information is available on the Cognitive Systems Lab's pages at the University of Bremen: https://www.uni-bremen.de/en/csl/research/silent-speech-communication/