AI decodes brain activity to reveal the meaning behind thoughts

Artificial intelligence (AI) has made non-invasive mind-reading possible by converting thoughts into text, according to a recent study published in the journal Nature Neuroscience. This achievement has the potential to revolutionize the field of communication, especially for patients struggling with speech after a stroke or motor neuron disease. The innovation involves an AI-based decoder that translates brain activity into a continuous stream of text with high accuracy. The system works by mapping patterns of neuronal activity to strings of words with particular meanings, rather than attempting to read out activity word by word.

Previous language-decoding systems required surgical implants, but the new approach is entirely non-invasive. The decoder was developed by neuroscientists at the University of Texas at Austin, who used functional magnetic resonance imaging (fMRI) to reconstruct speech from brain activity with striking accuracy. In doing so, they worked around a fundamental limitation of fMRI: although the technique can pinpoint brain activity with high spatial resolution, the blood-flow signal it measures lags the underlying neural activity by several seconds, which makes tracking activity in real time impossible.
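
To see why that lag matters: fMRI measures slow blood-flow (BOLD) changes rather than neural firing itself, and the response to a single word builds up and peaks several seconds later, smearing neighboring words together. The toy simulation below is illustrative only and not taken from the study; the response-function shape, the sampling rate and the word timings are all assumptions.

```python
# Toy illustration (not from the study): why fMRI lags neural activity.
import numpy as np

TR = 0.5                           # sampling step in seconds (assumed)
t = np.arange(0, 20, TR)           # 20-second time axis

# Simple double-gamma approximation of the hemodynamic response (illustrative parameters).
hrf = (t ** 5) * np.exp(-t) / 120 - 0.1 * (t ** 10) * np.exp(-t) / 3628800
hrf /= hrf.max()

# Neural "events": three word onsets, one second apart.
events = np.zeros_like(t)
events[[2, 4, 6]] = 1.0            # onsets at 1 s, 2 s and 3 s

# The measured BOLD signal is roughly the event train convolved with the response
# function, so the three words blur into one response that peaks seconds after they were heard.
bold = np.convolve(events, hrf)[: len(t)]

print(f"Simulated BOLD peak at ~{t[bold.argmax()]:.1f} s, well after the 1-3 s word onsets")
```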

The team built the decoder around a large language model, GPT-1, a precursor to OpenAI’s ChatGPT, and then generated text from fMRI scans of brain activity alone. About half of the time, the text closely matched the intended meaning of the original words, and sometimes it matched the meaning precisely. The decoder worked at the level of ideas, semantics, and meaning rather than exact words.
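
Based on that description, the decoding step can be pictured as a guess-and-check loop: a language model proposes candidate word sequences, a second model predicts the brain activity each candidate should evoke, and the candidate whose prediction best matches the measured activity is kept. The sketch below is a conceptual illustration under those assumptions only; the vocabulary, the propose_continuations and encoding_model helpers, and the scoring rule are toy placeholders, not the study's actual code.

```python
# Conceptual sketch of meaning-level decoding (toy placeholders, not the study's implementation).
import numpy as np

VOCAB = ["the", "dog", "ran", "home", "she", "said", "quickly"]   # toy vocabulary

def propose_continuations(prefix: str) -> list[str]:
    """Stand-in for a language model such as GPT-1: return plausible next words."""
    return VOCAB

def word_features(word: str, dim: int = 64) -> np.ndarray:
    """Toy 'semantic' features for a word (hash-seeded random vector)."""
    return np.random.default_rng(abs(hash(word)) % (2 ** 32)).standard_normal(dim)

def encoding_model(text: str) -> np.ndarray:
    """Stand-in encoding model: predict the activity pattern a word sequence should evoke."""
    words = text.split()
    if not words:
        return np.zeros(64)
    return np.mean([word_features(w) for w in words], axis=0)

def decode(observed: np.ndarray, n_words: int, beam_width: int = 3) -> str:
    """Beam search over candidate sequences, keeping those whose predicted activity
    best matches the measured activity (higher score = closer match)."""
    beams = [("", -np.inf)]
    for _ in range(n_words):
        expanded = []
        for text, _ in beams:
            for word in propose_continuations(text):
                candidate = (text + " " + word).strip()
                score = -np.linalg.norm(encoding_model(candidate) - observed)
                expanded.append((candidate, score))
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]

# Toy demo: "observed" activity evoked by a known phrase; the decoder recovers a
# sequence whose toy features are close to it, not necessarily the exact words.
observed = encoding_model("the dog ran home")
print(decode(observed, n_words=4))
```

The point the sketch tries to capture is the one the article makes: the search operates over whole candidate phrases and their meanings, rather than reading out brain activity one word at a time.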

The decoder also performed well when participants watched short silent videos: the text generated from their brain activity accurately described some of the content. However, the system struggled with certain aspects of language, including pronouns. The decoder was also personalized to each participant; when a model trained on one person was tested on another, the readout was unintelligible.

The achievement opens up a host of experimental possibilities, including reading the thoughts of someone who is dreaming or investigating how new ideas emerge from background brain activity. The breakthrough is technically impressive and could serve as a basis for future brain-computer interfaces. The team now hopes to assess whether the technique can be applied to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
