Artificial intelligence (AI) has made non-invasive mind-reading possible by converting thoughts into text, according to a recent study published in the journal Nature Neuroscience. This achievement has the potential to revolutionize the field of communication, especially for patients struggling with speech after a stroke or motor neuron disease. The innovation involves an AI-based decoder that translates brain activity into a continuous stream of text with high accuracy. The system works by mapping patterns of neuronal activity to strings of words with particular meanings, rather than attempting to read out activity word by word.
Previously, language decoding systems required surgical implants; the new breakthrough offers a non-invasive alternative. The decoder was developed by neuroscientists at the University of Texas at Austin, who used functional magnetic resonance imaging (fMRI) to reconstruct speech from brain activity with uncanny accuracy. This overcomes a fundamental limitation of fMRI: while the technique can localize brain activity with fine spatial resolution, its inherent time lag makes tracking activity in real time impossible.
The team trained the decoder using a large language model, GPT-1, a precursor to OpenAI’s ChatGPT, and then used fMRI scans alone to generate text from brain activity. About half the time, the decoded text closely matched the intended meaning of the original words, and occasionally it matched precisely. The decoder worked at the level of ideas, semantics, and meaning rather than exact words.
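The approach described above, matching candidate word sequences against measured brain activity rather than reading out words directly, can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration: the embedding, the "encoding model", and the candidate sentences are stand-ins, not the study's actual methods or data.

```python
def embed(text):
    """Stand-in text embedding: a 26-dim character-frequency vector.
    (A real system would use a language model's semantic features.)"""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def predict_brain_response(text):
    """Hypothetical encoding model: predicts brain activity from text.
    Here it is just the embedding itself, for simplicity."""
    return embed(text)

def score(candidate, measured):
    """Negative squared distance between predicted and measured activity;
    higher means the candidate better explains the measurement."""
    pred = predict_brain_response(candidate)
    return -sum((p - m) ** 2 for p, m in zip(pred, measured))

def decode(measured, candidates):
    """Keep whichever candidate sequence (e.g. proposed by a language
    model) best matches the measured brain activity."""
    return max(candidates, key=lambda c: score(c, measured))

# Simulated example: the participant heard "the dog ran"; the decoder
# compares several candidate continuations against the measured response.
measured = predict_brain_response("the dog ran")
candidates = ["the cat slept", "the dog ran", "a bird sang"]
print(decode(measured, candidates))  # → the dog ran
```

The key design idea this mirrors is that decoding is framed as a search over meaningful word sequences scored against predicted brain responses, which is why the output can capture the gist of a sentence without reproducing its exact words.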
The decoder also performed well when participants watched short silent videos: from their brain activity alone, it accurately described some of the content. However, the system struggled with certain aspects of language, including pronouns. The decoder was also personalized: when a model trained on one person was tested on another, the readout was unintelligible.
The achievement opens up a host of experimental possibilities, including reading thoughts from someone dreaming or investigating how new ideas spring up from background brain activity. Beyond being technically impressive, the breakthrough could serve as a basis for future brain-computer interfaces. The team now hopes to assess whether the technique could be applied to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).