Scientists use brain scans and AI to ‘decode’ thoughts

Paris: Scientists said on Monday they have found a way to use brain scans and artificial intelligence modeling to “transcribe” what people are thinking, in what was described as a step towards mind reading.

While the main goal of the language decoder is to help people who have lost the ability to communicate, the US scientists acknowledged that the technology raised questions about “mental privacy”.

Aiming to allay such fears, they ran tests showing that the decoder could not be used on anyone who had not allowed it to be trained on their brain activity over long hours inside a functional magnetic resonance imaging (fMRI) scanner.

Previous research has shown that brain implants can enable people who can no longer speak or type to spell out words or even sentences.

These “brain-computer interfaces” focus on the part of the brain that controls the mouth when trying to form words.

Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of a new study, said his team’s language decoder “works at a very different level”.

“Our system really works at the level of ideas, at the level of semantics, at the level of meaning,” Huth said at an online press conference.

According to the study in the journal Nature Neuroscience, this system is the first to be able to reconstruct continuous language without invasive brain implants.

For the study, three people spent a total of 16 hours inside an fMRI machine, listening to spoken stories, mostly podcasts such as The New York Times’ Modern Love.

This allowed the researchers to track how words, phrases and meanings prompted responses in areas of the brain known to process language.

They fed this data into a neural network language model that uses GPT-1, a predecessor to the AI technology later deployed in the hugely popular ChatGPT.

The model was trained to predict how each person’s brain would respond to perceived speech, then narrowed down candidate word sequences until it found the one whose predicted response most closely matched the measured one.
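In outline, that candidate-narrowing process works like a guessing game scored against the scanner. The sketch below is a toy illustration of the idea, not the study’s actual code: the vocabulary, the word embeddings, and the `predict_response` encoding model are all hypothetical stand-ins, and the beam search over candidates is an assumed simplification of how a language model’s proposals might be ranked.

```python
# Toy sketch of the decoding loop described above: candidate word sequences
# are extended step by step, each is scored by how well a (here, fake)
# encoding model's predicted brain response matches the measured fMRI
# response, and only the best candidates are kept. All components are
# hypothetical stand-ins for the study's actual models.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["i", "don't", "have", "a", "driver's", "license", "yet"]
EMBED = {w: rng.normal(size=16) for w in VOCAB}   # toy word embeddings

def predict_response(words):
    """Toy encoding model: map a word sequence to a predicted voxel pattern."""
    feats = np.mean([EMBED[w] for w in words], axis=0)
    return np.tanh(feats)  # stand-in for a learned stimulus-to-brain mapping

def score(candidate, measured):
    """Correlation between predicted and measured brain responses."""
    pred = predict_response(candidate)
    return float(np.corrcoef(pred, measured)[0, 1])

def decode(measured, length=4, beam_width=3):
    """Beam search: extend candidates word by word, keep the best matches."""
    beams = [[]]
    for _ in range(length):
        extended = [b + [w] for b in beams for w in VOCAB]
        extended.sort(key=lambda c: score(c, measured), reverse=True)
        beams = extended[:beam_width]
    return beams[0]

# Simulate a measured response to a "true" phrase, then decode it back.
truth = ["i", "have", "a", "license"]
measured = predict_response(truth) + rng.normal(scale=0.05, size=16)
print(decode(measured))
```

Because the comparison happens at the level of predicted brain responses rather than exact words, a decoder built this way tends to recover paraphrases of the stimulus rather than a verbatim transcript, which matches the behavior the researchers report below.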

To test the accuracy of the model, each participant listened to a new story in the fMRI machine.

Jerry Tang, first author of the study, said that the decoder could “retrieve a summary of what the user was listening to”.

For example, when the participant heard the phrase “I don’t have a driver’s license yet,” the model came back with “She hasn’t even started learning to drive yet.”

The decoder struggled with personal pronouns such as “I” or “he,” the researchers acknowledged.

But even when participants thought up their own stories — or watched silent films — the decoder was still able to discern “the gist,” Tang said.

This showed that “we are decoding something that is deeper than language, then converting it into language,” Huth said.

Because fMRI scanning is too slow to capture individual words, it collects “a mishmash, a bunch of information in a few seconds,” Huth said.
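That blurring can be illustrated with a toy simulation (not from the study): treating each word as a brief impulse and convolving the word train with a slow, gamma-shaped hemodynamic response shows how several seconds of speech collapse into a few sluggish scanner samples. The response shape, word rate, and 2-second sampling interval below are illustrative assumptions.

```python
# Toy illustration of fMRI's temporal blur: the measured BOLD signal is
# roughly the word-level stimulus convolved with a hemodynamic response
# lasting several seconds, and the scanner samples it only every ~2 s.
import numpy as np

dt = 0.1                                 # stimulus resolution, seconds
t = np.arange(0, 20, dt)                 # 20 s of time
hrf = (t / 5.0) * np.exp(-t / 5.0)       # crude gamma-shaped hemodynamic response
hrf /= hrf.sum()

stimulus = np.zeros_like(t)
stimulus[np.arange(0, 40, 4)] = 1.0      # one "word" every 0.4 s for 4 s

bold = np.convolve(stimulus, hrf)[:len(t)]  # slow, smeared BOLD signal
scans = bold[::round(2.0 / dt)]             # one scanner sample per ~2 s

print(f"{int(stimulus.sum())} words in 4 s of audio")
print(f"BOLD peaks near t = {t[np.argmax(bold)]:.1f} s, "
      f"sampled only {len(scans)} times in 20 s")
```

The simulated response peaks seconds after the words have ended, so each scanner sample reflects many words at once, which is the “mishmash” Huth describes.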

