AI decodes brain activity to reveal the meaning behind thoughts

Artificial intelligence (AI) has made non-invasive mind-reading possible by converting thoughts into text, according to a study published in the journal Nature Neuroscience. The achievement could transform communication for patients who have lost the ability to speak after a stroke or motor neuron disease. The innovation is an AI-based decoder that translates brain activity into a continuous stream of text. Rather than attempting to read out activity word by word, the system maps patterns of neural activity to strings of words that share a particular meaning.

Previous language-decoding systems required surgical implants; the new system is entirely non-invasive. The decoder was developed by neuroscientists at the University of Texas at Austin, who used functional magnetic resonance imaging (fMRI) to reconstruct speech from brain activity. The work overcomes a fundamental limitation of fMRI: although the technique can localize brain activity with high spatial resolution, the blood-flow signal it measures lags neural activity by several seconds, far too slowly to track speech word by word in real time.

The team trained the decoder using GPT-1, a large language model that was a precursor to OpenAI's ChatGPT, and then generated text from fMRI scans of brain activity alone. About half of the time, the decoded text closely matched the intended meaning of the original words, and occasionally it matched the meaning precisely. The decoder worked at the level of ideas and semantics rather than exact words.
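The meaning-level decoding described above can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the study's actual method: the candidate phrases, their random "semantic vectors", and the linear encoding model are all made up. The real system uses a language model (GPT-1) to propose candidate word sequences and an encoding model fitted to each participant's fMRI data; the sketch only shows the core idea of picking the candidate whose predicted brain response best matches the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy candidate phrases with made-up semantic vectors. In the real system,
# a language model proposes candidate word sequences; here we enumerate a
# fixed list for illustration.
candidates = {
    "the dog ran outside": rng.normal(size=8),
    "she opened the door": rng.normal(size=8),
    "rain fell all night": rng.normal(size=8),
}

# Hypothetical linear encoding model: predicts an fMRI response pattern
# from a phrase's semantic vector (weights are random for illustration).
W = rng.normal(size=(8, 20))

def predict_response(sem_vec):
    return sem_vec @ W

# Simulate "observed" brain activity: the response to one phrase plus noise.
true_phrase = "she opened the door"
observed = predict_response(candidates[true_phrase]) + 0.1 * rng.normal(size=20)

def decode(observed_activity):
    """Pick the candidate whose predicted response correlates best with
    the observed activity - matching at the level of meaning, not words."""
    def score(phrase):
        pred = predict_response(candidates[phrase])
        return np.corrcoef(pred, observed_activity)[0, 1]
    return max(candidates, key=score)

print(decode(observed))
```

Note the design choice this mirrors: the decoder never reads words directly off the brain signal; it scores whole candidate meanings against the activity, which is why its output can paraphrase rather than transcribe.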

The decoder also performed well when participants watched short silent videos: the text generated from their brain activity accurately described some of the content. However, the system struggled with certain aspects of language, including pronouns. The decoder was also highly personalized; when a model trained on one person was tested on another, the readout was unintelligible.

The achievement opens up a host of experimental possibilities, from reading the thoughts of someone dreaming to investigating how new ideas emerge from background brain activity. Beyond being technically impressive, the work could form a basis for future brain-computer interfaces. The team now hopes to assess whether the technique can be applied to more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
