AI decodes brain activity to reveal the meaning behind thoughts

Artificial intelligence (AI) has made non-invasive mind-reading possible by converting thoughts into text, according to a recent study published in the journal Nature Neuroscience. The achievement could transform communication for patients who struggle to speak after a stroke or motor neuron disease. At the heart of the work is an AI-based decoder that translates brain activity into a continuous stream of text. Rather than attempting to read out activity word by word, the system maps patterns of neuronal activity to strings of words with particular meanings.

Previously, language-decoding systems required surgical implants; the new approach is entirely non-invasive. The decoder was developed by neuroscientists at the University of Texas at Austin, who used functional magnetic resonance imaging (fMRI) to reconstruct speech from brain activity with uncanny accuracy. Doing so overcomes a fundamental limitation of fMRI: although the technique can pinpoint where in the brain activity occurs with high spatial resolution, it measures slow changes in blood flow that lag the underlying neural activity by several seconds, so it cannot track activity word by word in real time.

The team built the decoder around GPT-1, an early large language model and a precursor to OpenAI’s ChatGPT, and trained it on participants’ fMRI scans so that it could generate text from brain activity alone. About half of the time, the generated text closely matched the intended meaning of the original words, and occasionally it matched the meaning exactly. The decoder worked at the level of ideas and meaning rather than exact words.
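The study’s full pipeline is more elaborate, but the core idea can be illustrated with a minimal sketch. Everything below is a stand-in rather than the authors’ code: the toy vocabulary, the dummy language model, and the dummy encoding model are assumptions for illustration only. The sketch shows the general strategy of a language model proposing candidate word sequences, an encoding model predicting the brain response each candidate would evoke, and a beam search keeping the candidates whose predicted responses best match the measured scan.

```python
import numpy as np

# Hypothetical stand-ins: in the real system the language model is GPT-1 and the
# encoding model is fitted to each participant's fMRI data; here both are toy dummies.
VOCAB = ["the", "dog", "ran", "home", "she", "said", "hello", "quickly"]
RNG = np.random.default_rng(0)
WORD_EMBED = {w: RNG.normal(size=16) for w in VOCAB}  # toy word features

def propose_continuations(prefix, k=3):
    """Language-model stand-in: propose k candidate next words for a prefix."""
    return list(RNG.choice(VOCAB, size=k, replace=False))

def predict_brain_response(words):
    """Encoding-model stand-in: predict a brain-activity vector from a word sequence."""
    feats = np.mean([WORD_EMBED[w] for w in words], axis=0)
    return np.tanh(feats)  # toy nonlinearity in place of a fitted fMRI encoding model

def score(candidate, measured_response):
    """Score a candidate by how well its predicted response matches the measured scan."""
    return -np.linalg.norm(predict_brain_response(candidate) - measured_response)

def decode(measured_response, n_words=5, beam_width=3):
    """Beam search: keep the word sequences whose predicted responses fit the data best."""
    beams = [[w] for w in propose_continuations([], k=beam_width)]
    for _ in range(n_words - 1):
        expanded = [b + [w] for b in beams for w in propose_continuations(b)]
        expanded.sort(key=lambda c: score(c, measured_response), reverse=True)
        beams = expanded[:beam_width]
    return " ".join(beams[0])

# Example: "decode" a made-up brain response (a real run would use fMRI recordings).
fake_scan = np.tanh(RNG.normal(size=16))
print(decode(fake_scan))
```

Because candidates are scored on predicted brain responses rather than matched word for word, a decoder built this way naturally recovers the gist of a sentence instead of its exact wording, which is consistent with the behaviour the researchers reported.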

The decoder also held up when participants watched short silent videos: its readout of their brain activity accurately described some of the content. However, the system struggled with certain aspects of language, including pronouns. The decoder was also personalized; when a model trained on one person was tested on another, the readout was unintelligible.

The achievement opens up a host of experimental possibilities, from reading the thoughts of someone who is dreaming to investigating how new ideas spring up from background brain activity. Beyond being technically impressive, the breakthrough could serve as a basis for future brain-computer interfaces. The team now hopes to assess whether the technique could be applied to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
