
From Thought to Text: AI Converts Silent Speech into Written Words - Neuroscience News
Summary: A novel artificial intelligence system, the semantic decoder, can translate brain activity into continuous text. The system could revolutionize communication for people unable to speak due to conditions like stroke. This non-invasive approach uses fMRI scanner data, turning thoughts into text without requiring any surgical implants. While not perfect, this AI system successfully captures the essence of a person's thoughts about half of the time.

Key Facts: The semantic decoder AI was developed by researchers at The University of Texas at Austin. It works based on a transformer model similar to the ones that power OpenAI's ChatGPT and Google's Bard. The system has potential for use with more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).

Source: UT Austin

A new artificial intelligence system called a semantic decoder can translate a person's brain activity, recorded while listening to a story or silently imagining telling a story, into a continuous stream of text. The system, developed by researchers at The University of Texas at Austin, might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.

The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.

The work relies in part on a transformer model, similar to the ones that power OpenAI's ChatGPT and Google's Bard. Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list.

Brain activity is measured using an fMRI scanner after extensive training of the decoder, during which the individual listens to hours of podcasts in the scanner.
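The article does not give implementation details, but the pipeline it describes (a language model proposing text, scored against fMRI recordings) is often built as a beam search over a trained encoding model. The sketch below is a toy illustration under that assumption: the vocabulary, the hash-based `encode` function, and the random `propose_continuations` are all stand-ins, not the researchers' actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; a stand-in for a real transformer language model.
VOCAB = ["i", "went", "to", "the", "store", "home", "dog", "ran"]
DIM = 16  # dimensionality of the simulated fMRI response

def propose_continuations(prefix, k=4):
    """Stand-in for the language model: propose k candidate next words."""
    return list(rng.choice(VOCAB, size=k, replace=False))

def encode(text):
    """Stand-in encoding model mapping text to a predicted brain response.
    The real system fits a regression on hours of podcast-listening fMRI
    data; here it is a fixed word hash, normalized to unit length."""
    vec = np.zeros(DIM)
    for pos, word in enumerate(text.split()):
        vec[sum(ord(c) for c in word) % DIM] += 1.0 / (pos + 1)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def decode(recorded, steps=5, beam_width=3):
    """Beam search: extend candidate sentences word by word, keeping the
    ones whose *predicted* responses best match the recorded scan."""
    beams = [("", -1.0)]
    for _ in range(steps):
        scored = []
        for text, _ in beams:
            for word in propose_continuations(text):
                cand = (text + " " + word).strip()
                scored.append((cand, float(encode(cand) @ recorded)))
        scored.sort(key=lambda c: c[1], reverse=True)
        beams = scored[:beam_width]
    return beams[0]

# Simulate a scan evoked by a stimulus sentence, then decode from it.
scan = encode("i went to the store")   # pretend this came from the scanner
best_text, similarity = decode(scan)
print(best_text, round(similarity, 3))
```

Because the search maximizes similarity between predicted and recorded responses rather than matching words directly, the output can paraphrase the stimulus, which mirrors the gist-level behavior the researchers report.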
Later, provided that the participant is open to having their thoughts decoded, their listening to a new story or imagining telling a story allows the machine to generate corresponding text from brain activity alone.

"For a noninvasive method, this is a real leap forward compared to what's been done before, which is typically single words or short sentences," Huth said. "We're getting the model to decode continuous language for extended periods of time with complicated ideas."

The result is not a word-for-word transcript. Instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant's brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words.

For example, in experiments, a participant listening to a speaker say, "I don't have my driver's license yet" had their thoughts translated as, "She has not even started to learn to drive yet." Listening to the words, "I didn't know whether to scream, cry or run away. Instead, I said, 'Leave me alone!'" was decoded as, "Started to scream and cry, and then she just said, 'I told you to leave me alone.'"

Beginning with an earlier version of the paper that appeared as a preprint online, the researchers addressed questions about potential misuse of the technology. The paper describes how decoding worked only with cooperative participants who had participated willingly in training the decoder. Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance, for example by thinking other thoughts, results were similarly unusable.

"We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that," Tang said.
"We want to make sure people only use these types of technologies when they want to and that it helps them."

In addition to having participants listen to or think about stories, the researchers asked subjects to watch four short, silent videos whi...
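The example translations above share almost no vocabulary with the original sentences even though the meaning lines up, which is why a transcript-style accuracy measure would undersell the decoder. A quick way to see this is to compute exact-word overlap between stimulus and output. This is a deliberately crude metric of my own for illustration (the study itself uses semantic similarity measures, and the stop-word list here is an arbitrary assumption):

```python
def word_overlap(reference, decoded):
    """Jaccard overlap of content words between two sentences.
    A deliberately crude metric: it rewards shared vocabulary,
    not shared meaning."""
    stop = {"i", "to", "the", "a", "my", "she", "has", "not", "had"}
    ref = {w.strip(".,!?'\"").lower() for w in reference.split()} - stop
    dec = {w.strip(".,!?'\"").lower() for w in decoded.split()} - stop
    return len(ref & dec) / len(ref | dec) if ref | dec else 1.0

stimulus = "i don't have my driver's license yet"
decoded = "she has not even started to learn to drive yet"
print(round(word_overlap(stimulus, decoded), 2))  # only "yet" is shared
```

Near-zero word overlap on the paper's own example pair is exactly why the team scores decoded output by how well it matches the intended meaning rather than the original wording.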