Researchers at the University of California, San Francisco (UCSF) have achieved promising results in their brain-computer interface (BCI) project, demonstrating that a person with severe speech loss could type out, almost instantly, what he wanted to say simply by attempting to speak.
Facebook Reality Labs (FRL) launched the project in 2017 with the aim of developing a silent, non-invasive speech interface that would let people type simply by imagining the words they wanted to say.
The final phase of the project, named Project Steno, is the first demonstration that attempted speech can be combined with language models to drive a BCI. By decoding the brain signals the motor cortex sends to the vocal tract, the system restored a person's ability to communicate. The work is a remarkable milestone for neuroscience, and it brings to a close Facebook's years-long collaboration with UCSF's Chang Lab.
Facebook's funding enabled UCSF to significantly increase its server capacity, allowing the researchers to test more models in parallel and achieve more accurate results.
Earlier UCSF research had successfully deciphered a small set of complete spoken words and phrases from brain activity in real time, and later Chang Lab work showed that the system could recognize a much larger vocabulary with low word error rates. However, the researchers note that those results were obtained while participants spoke aloud, so it was unclear whether the method could decode words in real time when a person merely attempted to speak.
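For context, word error rate (WER) is the standard speech-recognition metric referenced here: the word-level edit distance between the decoded text and what the speaker intended, divided by the length of the reference. A minimal sketch (the example sentences are invented, not from the study):

```python
# Minimal sketch: word error rate (WER), computed as word-level edit
# distance divided by the reference length. Example sentences are invented.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One dropped word out of a four-word reference -> WER of 0.25 (25%).
print(wer("i am very thirsty", "i am thirsty"))  # 0.25
```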
The study's recent findings demonstrate successful real-time decoding of attempted conversational speech, and the project also shows how algorithms can employ language models to improve the accuracy of brain-to-text communication.
The study involved a participant who had lost the ability to speak intelligibly after a series of strokes. Electrodes were implanted on the surface of his brain during an elective operation. Throughout the trial, the participant worked directly with the UCSF team to capture dozens of hours of BCI-assisted attempted speech, which UCSF then used to develop machine learning models for speech detection and word classification. Despite having been paralyzed for almost 16 years, the participant was able to converse in real time as a result of this study.
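The article does not detail the models themselves, but the pipeline it describes has two stages: detect that a word is being attempted, then classify which word it is. A minimal sketch of that structure, with invented features and a simple nearest-template classifier standing in for UCSF's actual trained networks:

```python
# Minimal sketch (not UCSF's models): the detect-then-classify pipeline the
# article describes. The "neural features", templates, and threshold are
# all invented stand-ins for real cortical recordings and trained networks.
import numpy as np

rng = np.random.default_rng(0)
WORDS = ["hello", "thirsty", "family"]

# One invented 64-dim feature vector per word, standing in for the neural
# activity pattern the participant produces when attempting that word.
templates = {w: rng.normal(size=64) for w in WORDS}

def detect_attempt(window: np.ndarray, threshold: float = 5.0) -> bool:
    """Stage 1: decide whether this window contains an attempted word."""
    return float(np.linalg.norm(window)) > threshold

def classify_word(window: np.ndarray) -> str:
    """Stage 2: assign the window to the nearest known word template."""
    return min(WORDS, key=lambda w: float(np.linalg.norm(window - templates[w])))

# Simulate a noisy recording of the participant attempting "thirsty".
window = templates["thirsty"] + rng.normal(scale=0.3, size=64)
if detect_attempt(window):
    print("decoded word:", classify_word(window))  # -> thirsty
```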
The study also shows how the statistical properties of language can be used to increase the BCI's accuracy. Just as a phone auto-corrects and auto-completes the text you type, the researchers employ a language model to refine the decoder's raw predictions.
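To make the auto-correct analogy concrete, here is a minimal sketch (not the authors' code; the vocabulary, decoder probabilities, and bigram language model are all invented) of how a language model can override a decoder's word-by-word guesses by scoring whole sequences with Viterbi search:

```python
# Minimal sketch (not the authors' code): a bigram language model rescoring
# a decoder's per-word probabilities via Viterbi search. The vocabulary,
# decoder outputs, and bigram probabilities below are all invented.
import math

VOCAB = ["i", "am", "thirsty", "good", "very"]

# Hypothetical decoder outputs, P(word | brain activity), for a three-word
# attempted utterance; in the real system these come from neural networks.
decoder_probs = [
    {"i": 0.60, "am": 0.15, "thirsty": 0.05, "good": 0.10, "very": 0.10},
    {"i": 0.05, "am": 0.40, "thirsty": 0.20, "good": 0.15, "very": 0.20},
    {"i": 0.05, "am": 0.05, "thirsty": 0.30, "good": 0.35, "very": 0.25},
]

# Hypothetical bigram language model, P(word | previous word).
bigram = {
    ("<s>", "i"): 0.5, ("i", "am"): 0.6, ("am", "thirsty"): 0.3,
    ("am", "good"): 0.2, ("am", "very"): 0.2, ("very", "good"): 0.5,
}

def lm_prob(prev: str, word: str) -> float:
    return bigram.get((prev, word), 0.01)  # smoothing floor for unseen pairs

def viterbi(steps):
    """Return the word sequence maximizing decoder prob x language-model prob."""
    # best[w] = (log score, best path) over sequences ending in word w
    best = {w: (math.log(steps[0][w]) + math.log(lm_prob("<s>", w)), [w])
            for w in VOCAB}
    for probs in steps[1:]:
        new_best = {}
        for w in VOCAB:
            score, path = max(
                (s + math.log(probs[w]) + math.log(lm_prob(p[-1], w)), p)
                for s, p in best.values()
            )
            new_best[w] = (score, path + [w])
        best = new_best
    return max(best.values())[1]

print("decoder alone:", [max(p, key=p.get) for p in decoder_probs])  # i am good
print("with LM      :", viterbi(decoder_probs))                      # i am thirsty
```

In this toy example the raw decoder picks "good" at the last step, but the bigram model's stronger preference for "thirsty" after "am" flips the final output; the study applies the same principle with far larger models and vocabularies.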
Paper: https://ift.tt/3kqFHfR