Scientists Create Speech From Brain Signals

Scientists are reporting that they have developed a virtual prosthetic voice: a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, not even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters, which a computer synthesized into speech.)

“It’s formidable work, and it moves us up another level toward restoring speech” by decoding brain signals, said Dr. Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Florida, who was not a member of the research group.

Researchers have developed other virtual speech aids. Those work by decoding the brain signals responsible for recognizing letters and words, the verbal representations of speech. But such approaches lack the speed and fluidity of natural speaking.

The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence.

“We showed, by decoding the brain activity guiding articulation, we could simulate speech that is more accurate and natural sounding than synthesized speech based on extracting sound representations from the brain,” said Dr. Edward Chang, a professor of neurosurgery at the University of California, San Francisco, and an author of the new study. His colleagues were Gopala K. Anumanchipalli, also of UCSF, and Josh Chartier, who is affiliated with both UCSF and UC Berkeley.

The researchers also found that a synthesized voice system based on one person’s brain activity could be used, and adapted, by someone else — an indication that off-the-shelf virtual systems could be available one day.

This article originally appeared in The New York Times.
