Brain-reading devices allow people who can’t talk to communicate using their thoughts
A Synthetic Voice That Lets a Person with Paralysis Speak for Herself: Ann, Edward Chang’s Team and the Stanford Neuroscientists
A future in which people with paralysis can say whatever they want, with accuracy high enough to be understood reliably, is possible thanks to the work of Francis Willett, a neuroscientist at Stanford University.
These devices could become products in the near future, says Christian Herff, a computational neuroscientist at Maastricht University in the Netherlands.
In a statement to reporters, Bennett said that such devices could allow people who are nonverbal to remain connected to the larger world.
Edward Chang, a neuroscientist at the University of California, San Francisco, has worked with Ann, a woman who lost her ability to speak after a brainstem stroke.
Chang’s team used a different approach from Willett’s, placing a paper-thin rectangle containing 253 electrodes on the surface of the brain’s cortex. The technique, called electrocorticography (ECoG), is considered less invasive and can record the combined activity of thousands of neurons at the same time. The team trained AI algorithms to recognize patterns in Ann’s brain activity associated with her attempts to speak 249 sentences drawn from a 1,024-word vocabulary. The device produced 78 words per minute.
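To make the decoding step concrete, here is a minimal sketch of that kind of pipeline in Python: one feature vector per time window, one value per electrode, fed to a classifier that predicts a speech unit. Only the 253-electrode count comes from the study; the synthetic data, the 40-unit inventory and the logistic-regression model are illustrative assumptions, not the team’s actual architecture.

```python
# Hypothetical sketch of one decoding step: a feature vector per time
# window (one value per ECoG electrode) -> a predicted speech unit.
# The data are synthetic; only the 253-electrode count is from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

N_CHANNELS = 253   # electrodes on the ECoG grid (as reported)
N_WINDOWS = 2000   # synthetic training windows (assumed)
N_UNITS = 40       # phoneme-like speech units (assumed)

# Stand-in for per-window neural features, e.g. band power per electrode.
X = rng.normal(size=(N_WINDOWS, N_CHANNELS))
y = rng.integers(0, N_UNITS, size=N_WINDOWS)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# On random data accuracy stays near chance (1 / N_UNITS); a real decoder
# learns structure that this synthetic stand-in lacks.
print(f"chance ~ {1 / N_UNITS:.3f}, accuracy = {clf.score(X_te, y_te):.3f}")
```

In the published systems, recurrent networks and language models do far more work than this toy classifier, constraining the predicted units toward likely word sequences.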
Chang and his team also created customized algorithms to convert Ann’s brain signals into a synthetic voice and an animated avatar that mimics facial expressions. They personalized the voice to sound like Ann’s own voice before her injury.
“The simple fact of hearing a voice similar to your own is emotional,” Ann told the researchers in a feedback session after the study. “It was huge when I could talk for myself.”
Brain-Computer Interfaces for Clinical Use: The Road Ahead for Two Case Studies
Many improvements are needed before the BCIs can be made available for clinical use. “The ideal scenario is for the connection to be cordless,” Ann told the researchers. A BCI suitable for everyday use would have to be wireless, with no visible wires or cables, says Yvert. Both teams also want to continue increasing the accuracy and speed of their devices.
And the participants in both studies still have the ability to engage their facial muscles when thinking about speaking, and their speech-related brain regions are intact, says Herff. “This will not be the case for every patient.”
The proof of concept provides motivation for industry to translate it into a device that people can actually use, says Willett.
The devices must also be tested on many more people to prove their reliability. No matter how sophisticated the data are, they must be understood in context, says Judy Illes, a neuroscience researcher at the University of British Columbia in Canada. “We have to be careful with overpromising wide generalizability to large populations,” she adds. “I am not confident that we are there yet.”
Two studies demonstrate how brain-computer interfaces could help people to communicate, and researchers work out how hot it can get before tropical leaves start to die.
As the climate warms, tropical forests around the world are facing increasing temperatures, and it is not known how much warming trees can tolerate before their leaves start to die. One team combined multiple data sources to try to answer this question, and suggests that 3.9 °C of warming would push leaves past a tipping point at which photosynthesis breaks down. Such a scenario would likely cause significant damage to these ecosystems, which act as vital carbon stores and as homes to rich biodiversity.
Paralysis had robbed the two women of their ability to speak: motor neuron disease in one case, a brainstem stroke in the other. Though they can no longer enunciate clearly, they still remember how to formulate words.
Although the devices are slower than the roughly 160 words per minute of natural conversation among English speakers, scientists say they are an exciting step toward restoring real-time speech using a brain-computer interface, or BCI. “It is getting close to being used in everyday life,” says Marc Slutzky, a neurologist at Northwestern University who wasn’t involved in the new studies.
In the Stanford study, researchers developed a BCI that uses the Utah array, a tiny square sensor that looks like a hairbrush with 64 needle-like bristles. Each is tipped with an electrode, and together they collect the activity of individual neurons. A neural network is then trained to recognize the patterns of brain activity associated with attempted speech and translate them into words on a screen.
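As a rough illustration of that last step, the sketch below wires up a small recurrent network that maps a time series of 64-channel firing-rate features to per-timestep token scores. The 64-electrode count matches the Utah array described above; the GRU architecture, the layer sizes and the 50-token inventory are assumptions made for illustration, not the Stanford team’s actual model.

```python
# Hypothetical sketch of the neural-network decoder described above:
# a time series of 64-channel firing-rate features -> per-timestep
# scores over a small token inventory. Architecture and sizes are
# illustrative assumptions, not the Stanford team's actual model.
import torch
import torch.nn as nn

N_ELECTRODES = 64   # one per needle of the Utah array (as reported)
N_TOKENS = 50       # assumed phoneme/word inventory

class SpeechDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_ELECTRODES, 128, batch_first=True)
        self.head = nn.Linear(128, N_TOKENS)

    def forward(self, x):            # x: (batch, time, electrodes)
        hidden, _ = self.rnn(x)      # (batch, time, 128)
        return self.head(hidden)     # (batch, time, token scores)

decoder = SpeechDecoder()
window = torch.randn(1, 50, N_ELECTRODES)  # one synthetic 1 s window at 50 Hz
scores = decoder(window)
print(scores.shape)                  # torch.Size([1, 50, 50])
```

In practice, a decoder like this would be trained on recorded sessions of attempted speech and paired with a language model that turns the raw token scores into readable sentences.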