Device lets mute patients talk

Monday, 29 April 2019


APRNEWS - A device capable of detecting and deciphering brain signals could give the gift of speech back to people who have lost the ability.

Electrodes fitted in the human brain can pick up on what a person wants to say and transform it into a digital signal, which is then spoken by a voice synthesiser.

It was tested on five volunteers who already had electrodes in their brains as part of their treatment for epilepsy, and the machine was able to produce 150 words per minute.

This, the researchers from the University of California claim, is much faster than existing technology and equivalent to the pace of normal human conversation.

The study was published in the journal Nature.

The technology relies on the brain signals that drive the parts of the face and throat involved in speech, such as movements of the jaw, larynx, lips and tongue.

The researchers found that although these signals are very complex, they are also similar in most people.

Natural speech production involves more than 100 facial muscles, according to the scientists.

The technology could give people the ability to talk again, as long as they are able to imagine mouthing the words. Signals from the brain are fed into a neural network computer linked to a voice synthesiser, similar to that used by the late Stephen Hawking, but far quicker.

The famed academic, who suffered from motor neuron disease for most of his adult life, was only able to produce around ten words a minute. These were generated by using his cheek to select the words, a relatively slow process despite being the best technology available at the time.

At the start of the study, patterns of electrical activity were recorded from the brains of the volunteers as they spoke several hundred sentences aloud.

All the volunteers, four women and one man, were epilepsy patients who had undergone surgery to implant an electrode array on to their brain surfaces.

The passages were taken from well-known children’s stories, including Sleeping Beauty, The Frog Prince, and Alice In Wonderland.

Using the recordings, the US team devised a system capable of translating the brain signals responsible for individual movements of the vocal tract.
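For readers who want a more concrete picture of the pipeline described above, the sketch below shows one way such a two-stage decoder could be structured: neural recordings are first mapped to estimated vocal-tract movements, which are then mapped to acoustic features for a synthesiser. This is an illustration only, written in PyTorch with hypothetical layer sizes and names; it is not the research team's code.

```python
# Illustrative two-stage decoder: brain recordings -> vocal-tract movements -> acoustic features.
# All layer sizes, feature counts and names are hypothetical; the sketch only mirrors the
# pipeline described in the article (brain signals -> articulatory movements -> synthesised speech).
import torch
import torch.nn as nn

class ArticulatoryDecoder(nn.Module):
    """Stage 1: map neural recordings to estimated vocal-tract movements."""
    def __init__(self, n_electrodes=256, n_articulatory=33, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulatory)

    def forward(self, ecog):                  # ecog: (batch, time, n_electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)                    # (batch, time, n_articulatory)

class AcousticDecoder(nn.Module):
    """Stage 2: map vocal-tract movements to acoustic features for a synthesiser."""
    def __init__(self, n_articulatory=33, n_acoustic=32, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_articulatory, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, movements):
        h, _ = self.rnn(movements)
        return self.out(h)                    # features a vocoder could turn into audio

# Usage: decode a three-second window of (hypothetical) 200 Hz neural features.
ecog = torch.randn(1, 600, 256)
movements = ArticulatoryDecoder()(ecog)
acoustics = AcousticDecoder()(movements)
print(acoustics.shape)                        # torch.Size([1, 600, 32])
```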

In trials of 101 sentences, volunteer listeners were easily able to understand and transcribe the synthesised speech.

The research, led by Dr. Edward Chang from the University of California at San Francisco, US, is reported in the latest issue of the journal Nature.

The scientists wrote: “Listeners were able to transcribe synthesised speech well. Of the 101 synthesised trials, at least one listener was able to provide a perfect transcription for 82 sentences with a 25-word pool and 60 sentences with a 50-word pool.”

With The Guardian