Brain-reading devices allow paralysed people to talk using their thoughts
Two studies demonstrate how brain-computer interfaces could help people to communicate, and a separate analysis works out how hot it can get before tropical leaves start to die.
The studies used different approaches but achieved similar results. The Stanford study’s error rate was 9.1 percent when limited to a 50-word vocabulary, rising to about 24 percent when expanded to a 125,000-word vocabulary. After about four months, the Stanford algorithm could convert brain signals into words at about 62 words per minute. The UC San Francisco and Berkeley algorithm decoded at a median rate of 78 wpm, with an error rate of 25 percent for a 1,024-word vocabulary and 8.2 percent for a 118-word vocabulary.
Although a 23 to 25 percent error rate isn’t good enough for everyday use, it’s a significant improvement over existing tech. In a press briefing, Edward Chang, chair of neurological surgery at UCSF and co-author of the UCSF study, noted that the effective rate of communication for existing technology is “laborious” at five to 15 wpm when compared to the 150 to 250 wpm for natural speech.
Chang said that 60 to 70 wpm was a real milestone for the field, coming from two different centers and two different approaches.
These studies are more proof of concept than a technology ready for prime time. One likely issue is that these systems require long training sessions. Researchers from both teams said at the briefing that they hoped training would become less intensive in the future.
“These are very early studies and we don’t have a big database of data from other people. As we do more of these recordings and get more data, we should be able to transfer what the algorithms learn from other people to new people,” says Frank Willett, a research scientist at the Howard Hughes Medical Institute and co-author of the Stanford study. Willett noted, however, that this wasn’t guaranteed and that more research was needed.
Another issue is that the tech has to be easy enough for people to use at home, without requiring caregivers to go through complicated training. Brain implants are also invasive, and in these particular studies, the BCI had to be connected via wires to a device on the outside of the skull that was then attached to a computer. There are also concerns about electrode degradation and the possibility that these implants may not be permanent solutions. To reach consumers, the tech will need rigorous testing, which can be lengthy and expensive.
The potential benefit of this technology is tremendous if it can be widely implemented, and Chang says that they have crossed a threshold of performance that they are excited about. “We are thinking about that quite seriously and what the next steps are.”
These devices “could be products in the very near future”, says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands.
“For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships,” said Pat Bennett, the participant in the Stanford study, in a statement to reporters.
In a separate study, Chang and his colleagues at the University of California, San Francisco, worked with a 47-year-old woman named Ann, who lost her ability to speak after a brainstem stroke 18 years ago.
Although the implants used by Willett’s team, which capture neural activity more precisely, outperformed the ECoG (electrocorticography) array used by Chang’s team on larger vocabularies, it is “nice to see that with ECoG, it’s possible to achieve low word-error rate”, says Blaise Yvert, a neurotechnology researcher at the Grenoble Institute of Neuroscience in France.
Chang and his team also created customized algorithms to convert Ann’s brain signals into a synthetic voice and an animated avatar that mimics facial expressions. They personalized the voice to sound like Ann’s before her injury, by training it on recordings from her wedding video.
After the study, Ann gave the researchers feedback, saying that hearing a voice like her own was emotional. “Being able to talk for myself was big for me,” she said.
The participants in both studies can still engage their facial muscles when thinking about speaking, and their speech-related brain regions are intact, says Herff. “This will not be the case for every patient.”
Source: Brain-reading devices allow paralysed people to talk using their thoughts
“We see this as a proof of concept and just providing motivation for industry people in this space to translate it into a product somebody can actually use,” says Willett.
Tropical forests are facing increasing temperatures as the climate warms, but it’s unknown how much the trees can endure before their leaves start to die. A team combined multiple data sources to try to answer the question of whether warming of 3.8 °C would push leaves past a tipping point at which photosynthesis breaks down. This scenario would likely cause significant damage to these ecosystems, which play a vital role in carbon storage and are home to significant biodiversity.