
Decoding Brain Signals: Study Shines Light on Neural Pathways

Summary: Researchers used the simple worm, Caenorhabditis elegans, to gain profound insights into how neural information flows in the brain.

Using advanced techniques like optogenetics, they visually tracked signal flow in real time, neuron by neuron, to chart its pathways. Contrary to predictions from the worm’s connectome map, they found critical “wireless signals”: molecular releases that affect neural dynamics.

This groundbreaking research offers a stepping stone to understanding more complex brains.

  • The team studied C. elegans, a transparent worm with 302 neurons, making it an ideal model for mapping brain signal flow.
  • Through pioneering optogenetics, they visualized real-time signaling, uncovering unexpected “wireless signals” using neuropeptides.
  • Their findings challenge existing predictions based on the worm’s connectome, revealing molecular details crucial to understanding neural response.

Source: Princeton

Do we really know how the brain works?

In the last several decades, scientists have made great strides in understanding this fantastically complex organ. They now know a great deal about the brain’s cellular neurobiology and have learned much about its neural connections and the components that make up those connections.


Despite this, a whole host of important questions remain unanswered and, consequently, the brain continues to be one of science’s great, tantalizing mysteries.

Perhaps one of the most nagging of these questions revolves around our understanding of the brain as a system. Scientists are still largely in the dark about how the brain functions as a network of interacting components, about how all the neural components cooperate, and especially, how information is processed between and among this complex network of neurons.

Now, however, a team of neuroscientists and physicists at Princeton University is helping to shine a clarifying light on how information flows in the brain by studying, of all things, the brain of a very small but ubiquitous worm known as Caenorhabditis elegans. The details of the experiment are chronicled in a recent issue of Nature.

The team consisted of Francesco Randi, Sophie Dvali and Anuj Sharma and was led by Andrew Leifer, a neuroscientist and physicist.

“Brains are exciting and mysterious,” said Leifer. “Our team is interested in the question of how collections of neurons process information and generate action.”

Interest in this question has broad implications, Leifer added. Understanding how a network of neurons works is a specific example of a broader class of questions in biological physics, namely, how collective phenomena emerge from networks of interacting cells and molecules.

This area of research has implications for many topics relevant to biological physics as well as contemporary, cutting-edge technologies, such as artificial intelligence.

The first step in answering the question of how information is processed through a network of interacting neurons required that Leifer and his team find a suitable organism that could easily be manipulated in the lab.

This turned out to be C. elegans, an unsegmented, non-parasitic nematode, or roundworm, that has been studied by scientists for decades and is considered a genetic model organism. Model organisms are commonly used in the laboratory to help scientists understand biological processes because their anatomy, genetics and behaviors are well understood.

The worm is approximately one millimeter in length and is found in many bacteria-rich environments. Especially pertinent to the current study is the fact that the organism has a nervous system of only 302 neurons in its entire body, 188 of which reside in its brain.

“By contrast, a human brain has hundreds of billions of neurons,” said Leifer. “So, these worms are much simpler to study. In fact, these worms are excellent for experimentation because they strike just the right balance between simplicity and complexity.”

Importantly, added Leifer, C. elegans was the first organism to have its brain wiring fully “mapped.” This means that scientists have compiled a comprehensive diagram, or “map,” of all its neurons and synapses—the places where neurons physically connect and communicate with other neurons.

In the parlance of neuroscience, this field of endeavor is called “connectomics,” and a comprehensive map of the neural connections in an organism’s brain is known as a “connectome.” One of the main goals of connectomics is to identify the specific nerve connections responsible for particular behaviors.

An additional advantage in using C. elegans in laboratory experiments is that the worm is transparent, and, in certain cases, its tissue has been genetically engineered to be light sensitive.

This area of research is known as “optogenetics,” and it has revolutionized many aspects of experimentation in neuroscience. Instead of the more conventional approach of using an electrode to deliver a current into a neuron and thereby stimulate a response, the optogenetic technique takes light-sensitive proteins from certain organisms and expresses them in the neurons of another, so that researchers can control the organism’s behavior or responses using light signals.

Similarly, other proteins can be used to light up and report when one neuron signals to another. This means two important things for laboratory experimentation: that an organism will respond to the presence of light, and that a neuron, once it receives a signal from another neuron, will “light up.” This has allowed researchers to study the interaction of neurons visually.

“What is really powerful about this tool is that you can literally turn neurons on and watch them signal in real time,” said Leifer. “In essence, we can convert the problem of measuring and manipulating neural activity to one of collecting and delivering the right light to the right place at the right time.”

These optical tools allowed Leifer’s team to begin the painstaking task of understanding how information flows through the worm’s brain. The goal was to understand how signals flow directly through the worm’s entire brain, so each neuron had to be measured.

This involved isolating one neuron at a time, shining a light on it, so that it was “activated,” and then observing how the other neurons responded.

“For this experiment, we went one neuron at a time through the entire brain, activating or perturbing each neuron and then watching the whole network respond,” said Leifer. “This way, we were able to map out how signals flowed through the network.”

“This was an approach that had never been done before at the scale of an entire brain,” added Leifer.

In all, Leifer and his team performed nearly 10,000 stimulation events and measured the responses of more than 23,000 pairs of neurons, a task that took seven years from conception to completion.
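To make the logic of this survey concrete, here is a minimal sketch of the kind of bookkeeping involved: stimulate one neuron, record every other neuron’s response, and accumulate the results into a signal-propagation matrix. The linear response model, trial count and noise level below are illustrative assumptions, not the team’s actual pipeline.

```python
import numpy as np

# Toy version of the perturbation survey: activate one neuron at a time
# and record how every other neuron responds, accumulating the results
# into a signal-propagation matrix.

n_neurons = 188  # head neurons, as in the study
rng = np.random.default_rng(0)

# Hidden "ground truth" coupling that stands in for the real worm.
true_coupling = rng.normal(0.0, 1.0, (n_neurons, n_neurons))

def stimulate(neuron):
    """Simulate optogenetically activating one neuron and recording the
    calcium response of every neuron, plus measurement noise."""
    drive = np.zeros(n_neurons)
    drive[neuron] = 1.0
    return true_coupling @ drive + rng.normal(0.0, 0.1, n_neurons)

# Go one neuron at a time through the entire brain, as the team did.
propagation = np.zeros((n_neurons, n_neurons))
for j in range(n_neurons):
    trials = np.stack([stimulate(j) for _ in range(3)])  # repeated trials
    propagation[:, j] = trials.mean(axis=0)  # average response to neuron j

print(propagation.shape)  # (188, 188): response of neuron i to stimulating j
```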

The research conducted by Leifer and his team is thus far the most comprehensive description of how signals flow through a brain. For scientists who study C. elegans, the work provides a wealth of new information about how specific signals work in the worm’s brain, and it is hoped that it will help advance basic research.

An equally important finding was that a number of the empirical observations Leifer and his team made during the experiment often contradicted the predictions of worm behavior based on mathematical models derived from the worm’s connectome map.

“We concluded that, in many cases, many molecular details that you can’t see from the wiring diagram are actually very important for predicting how the network should respond,” said Leifer.

The researchers suggest that there is a form of signaling—part of the “molecular details that you can’t see”—that does not progress along neural wires. Leifer and his group characterized these as “wireless signals.”

Although wireless signaling is well known among neuroscientists, it has largely been underappreciated in the study of neural dynamics because it was often thought to be a very slow process.

Wireless signaling is a form of signaling by which a neuron releases molecules, called neuropeptides, into the extracellular space, or “extracellular milieu,” between neurons. These chemicals diffuse and bind to other neurons even if there is no physical connection between them.
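A toy model makes the point concrete. In the sketch below, a connectome-only simulation misses a response that appears once an extrasynaptic, peptide-like coupling term is added; all matrices, rates and time constants are hypothetical, not fitted to data.

```python
import numpy as np

# Toy linear rate model contrasting "wired" (synaptic) coupling with an
# added "wireless" (extrasynaptic neuropeptide) term.

n = 5
rng = np.random.default_rng(1)

A_wired = rng.normal(0.0, 0.3, (n, n))  # connectome-derived weights (fake)
A_wireless = np.zeros((n, n))
A_wireless[3, 0] = 0.8  # peptide released by neuron 0 excites neuron 3,
                        # even though no wired connection links them

def settle(A, steps=200, dt=0.1):
    """Euler-integrate dx/dt = -x + A x + input, stimulating neuron 0."""
    inp = np.zeros(n)
    inp[0] = 1.0
    x = np.zeros(n)
    for _ in range(steps):
        x = x + dt * (-x + A @ x + inp)
    return x

wired_only = settle(A_wired)
wired_plus_wireless = settle(A_wired + A_wireless)

print("neuron 3, connectome model:  ", round(wired_only[3], 3))
print("neuron 3, with wireless term:", round(wired_plus_wireless[3], 3))
```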

Finally, the researchers believe that an important impact of their work is that it allows other neuroscientists studying this and similar phenomena to develop better models with which to understand the brain as a system.

“With our research, we provided a very important piece of the puzzle that was missing,” said Leifer.

Funding: This work was primarily supported by the National Institutes of Health New Innovator Award, a National Science Foundation CAREER Award, and an award from the Simons Foundation. Funding was also received from an NSF Physics Frontier Center grant that supports Princeton University’s Center for Physics of Biological Function.

About this neuroscience research news

Author: Catherine Zandonella
Source: Princeton
Contact: Catherine Zandonella – Princeton
Image: The image is credited to Neuroscience News

Original Research: Open access. “Neural signal propagation atlas of Caenorhabditis elegans” by Andrew Leifer et al. Nature

Neural signal propagation atlas of Caenorhabditis elegans

Establishing how neural function emerges from network properties is a fundamental problem in neuroscience.

Here, to better understand the relationship between the structure and the function of a nervous system, we systematically measure signal propagation in 23,433 pairs of neurons across the head of the nematode Caenorhabditis elegans by direct optogenetic activation and simultaneous whole-brain calcium imaging.

We measure the sign (excitatory or inhibitory), strength, temporal properties and causal direction of signal propagation between these neurons to create a functional atlas. We find that signal propagation differs from model predictions that are based on anatomy.

Using mutants, we show that extrasynaptic signalling not visible from anatomy contributes to this difference. We identify many instances of dense-core-vesicle-dependent signalling, including on timescales of less than a second, that evoke acute calcium transients—often where no direct wired connection exists but where relevant neuropeptides and receptors are expressed. We propose that, in such cases, extrasynaptically released neuropeptides serve a similar function to that of classical neurotransmitters.

Finally, our measured signal propagation atlas better predicts the neural dynamics of spontaneous activity than do models based on anatomy. We conclude that both synaptic and extrasynaptic signalling drive neural dynamics on short timescales, and that measurements of evoked signal propagation are crucial for interpreting neural function.


A First-of-Its-Kind Signal Was Detected in The Human Brain


Scientists have identified a unique form of cell messaging occurring in the human brain, revealing just how much we still have to learn about its mysterious inner workings.

Excitingly, the discovery hints that our brains might be even more powerful units of computation than we realized.

Back in 2020, researchers from institutes in Germany and Greece reported a mechanism in the brain's outer cortical cells that produces a novel 'graded' signal all on its own, one that could provide individual neurons with another way to carry out their logical functions.

By measuring the electrical activity in sections of tissue removed during surgery on epileptic patients and analyzing their structure using fluorescent microscopy, the neurologists found individual cells in the cortex used not just the usual sodium ions to 'fire', but calcium as well.

This combination of positively charged ions kicked off waves of voltage that had never been seen before, referred to as calcium-mediated dendritic action potentials, or dCaAPs.

Brains – especially those of the human variety – are often compared to computers. The analogy has its limits, but on some levels they perform tasks in similar ways.

Both use the power of an electrical voltage to carry out various operations. In computers it's in the form of a rather simple flow of electrons through intersections called transistors.

In neurons, the signal is in the form of a wave of opening and closing channels that exchange charged particles such as sodium, chloride, and potassium. This pulse of flowing ions is called an action potential.

Instead of transistors, neurons manage these messages chemically at the end of branches called dendrites.

"The dendrites are central to understanding the brain because they are at the core of what determines the computational power of single neurons," Humboldt University neuroscientist Matthew Larkum told Walter Beckwith at the American Association for the Advancement of Science in January 2020.

Dendrites are the traffic lights of our nervous system. If an action potential is significant enough, it can be passed on to other nerves, which can block or pass on the message.

These are the logical underpinnings of our brain – ripples of voltage that can be communicated collectively in two forms: an AND message (if x and y are triggered, the message is passed on) or an OR message (if x or y is triggered, the message is passed on).

Arguably, nowhere is this more complex than in the dense, wrinkled outer section of the human central nervous system: the cerebral cortex. The deeper second and third layers are especially thick, packed with branches that carry out the higher-order functions we associate with sensation, thought, and motor control.

It was tissues from these layers that the researchers took a close look at, hooking up cells to a device called a somatodendritic patch clamp to send action potentials up and down each neuron, recording their signals.

"There was a 'eureka' moment when we saw the dendritic action potentials for the first time," said Larkum .

To ensure any discoveries weren't unique to people with epilepsy, they double checked their results in a handful of samples taken from brain tumors.

While the team had carried out similar experiments on rats, the kinds of signals they observed buzzing through the human cells were very different.

More importantly, when they dosed the cells with a sodium channel blocker called tetrodotoxin, they still found a signal. Only by blocking calcium did all fall quiet.

Finding an action-potential mediated by calcium is interesting enough. But modelling the way this sensitive new kind of signal worked in the cortex revealed a surprise.

In addition to the logical AND and OR-type functions, these individual neurons could act as 'exclusive' OR (XOR) intersections, which only permit a signal when another signal is graded in a particular fashion.

"Traditionally, the XOR operation has been thought to require a network solution," the researchers wrote .

More work needs to be done to see how dCaAPs behave across entire neurons and in a living system, not to mention whether it's a human thing or whether similar mechanisms have evolved elsewhere in the animal kingdom.

Technologists are also looking to our own nervous system for inspiration on how to develop better hardware; knowing that our individual cells have a few more tricks up their sleeves could lead to new ways of networking transistors.

Exactly how this new logic tool squeezed into a single nerve cell translates into higher functions is a question for future researchers to answer.

This research was published in Science.

A version of this article was originally published in January 2020.



Study reveals a universal pattern of brain wave frequencies


Throughout the brain’s cortex, neurons are arranged in six distinctive layers, which can be readily seen with a microscope. A team of MIT and Vanderbilt University neuroscientists has now found that these layers also show distinct patterns of electrical activity, which are consistent over many brain regions and across several animal species, including humans.

The researchers found that in the topmost layers, neuron activity is dominated by rapid oscillations known as gamma waves. In the deeper layers, slower oscillations called alpha and beta waves predominate. The universality of these patterns suggests that these oscillations are likely playing an important role across the brain, the researchers say.


“When you see something that consistent and ubiquitous across cortex, it’s playing a very fundamental role in what the cortex does,” says Earl Miller, the Picower Professor of Neuroscience, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the new study.

Imbalances in how these oscillations interact with each other may be involved in brain disorders such as attention deficit hyperactivity disorder, the researchers say.

“Overly synchronous neural activity is known to play a role in epilepsy, and now we suspect that different pathologies of synchrony may contribute to many brain disorders, including disorders of perception, attention, memory, and motor control. In an orchestra, one instrument played out of synchrony with the rest can disrupt the coherence of the entire piece of music,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and one of the senior authors of the study.

André Bastos, an assistant professor of psychology at Vanderbilt University, is also a senior author of the open-access paper, which appears today in Nature Neuroscience. The lead authors of the paper are MIT research scientist Diego Mendoza-Halliday and MIT postdoc Alex Major.

Layers of activity

The human brain contains billions of neurons, each of which has its own electrical firing patterns. Together, groups of neurons with similar patterns generate oscillations of electrical activity, or brain waves, which can have different frequencies. Miller’s lab has previously shown that high-frequency gamma rhythms are associated with encoding and retrieving sensory information, while low-frequency beta rhythms act as a control mechanism that determines which information is read out from working memory.

His lab has also found that in certain parts of the prefrontal cortex, different brain layers show distinctive patterns of oscillation: faster oscillation at the surface and slower oscillation in the deep layers. One study, led by Bastos when he was a postdoc in Miller’s lab, showed that as animals performed working memory tasks, lower-frequency rhythms generated in deeper layers regulated the higher-frequency gamma rhythms generated in the superficial layers.

In addition to working memory, the brain’s cortex also is the seat of thought, planning, and high-level processing of emotion and sensory information. Throughout the regions involved in these functions, neurons are arranged in six layers, and each layer has its own distinctive combination of cell types and connections with other brain areas.

“The cortex is organized anatomically into six layers, no matter whether you look at mice or humans or any mammalian species, and this pattern is present in all cortical areas within each species,” Mendoza-Halliday says. “Unfortunately, a lot of studies of brain activity have been ignoring those layers because when you record the activity of neurons, it's been difficult to understand where they are in the context of those layers.”

In the new paper, the researchers wanted to explore whether the layered oscillation pattern they had seen in the prefrontal cortex is more widespread, occurring across different parts of the cortex and across species.

Using a combination of data acquired in Miller’s lab, Desimone’s lab, and labs from collaborators at Vanderbilt, the Netherlands Institute for Neuroscience, and the University of Western Ontario, the researchers were able to analyze 14 different areas of the cortex, from four mammalian species. This data included recordings of electrical activity from three human patients who had electrodes inserted in the brain as part of a surgical procedure they were undergoing.

Recording from individual cortical layers has been difficult in the past, because each layer is less than a millimeter thick, so it’s hard to know which layer an electrode is recording from. For this study, electrical activity was recorded using special electrodes that record from all of the layers at once, then feed the data into a new computational algorithm the authors designed, termed FLIP (frequency-based layer identification procedure). This algorithm can determine which layer each signal came from.

“More recent technology allows recording of all layers of cortex simultaneously. This paints a broader perspective of microcircuitry and allowed us to observe this layered pattern,” Major says. “This work is exciting because it is both informative of a fundamental microcircuit pattern and provides a robust new technique for studying the brain. It doesn’t matter if the brain is performing a task or at rest and can be observed in as little as five to 10 seconds.”
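The core intuition behind a frequency-based layer identification procedure can be sketched in a few lines: on a laminar probe, channels with relatively more gamma power sit superficially and channels with relatively more alpha/beta power sit deep, so a band-power ratio orders the channels by depth. The synthetic signals and thresholds below are illustrative; the published FLIP algorithm is considerably more sophisticated.

```python
import numpy as np
from scipy.signal import welch

fs = 1000  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

def synthetic_channel(depth_frac):
    """Fake LFP: gamma-dominated near the surface (depth_frac ~ 0),
    alpha/beta-dominated in the deep layers (depth_frac ~ 1)."""
    gamma = (1 - depth_frac) * np.sin(2 * np.pi * 60 * t)
    beta = depth_frac * np.sin(2 * np.pi * 20 * t)
    return gamma + beta + 0.5 * rng.normal(size=t.size)

def band_power(x, lo, hi):
    freqs, psd = welch(x, fs=fs, nperseg=2048)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

for frac in np.linspace(0, 1, 5):  # five channels, surface to deep
    x = synthetic_channel(frac)
    ratio = band_power(x, 40, 100) / band_power(x, 10, 30)
    label = "superficial-like" if ratio > 1 else "deep-like"
    print(f"relative depth {frac:.2f}: gamma/(alpha+beta) = {ratio:5.2f} -> {label}")
```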

Across all species, in each region studied, the researchers found the same layered activity pattern.

“We did a mass analysis of all the data to see if we could find the same pattern in all areas of the cortex, and voilà, it was everywhere. That was a real indication that what had previously been seen in a couple of areas was representing a fundamental mechanism across the cortex,” Mendoza-Halliday says.

Maintaining balance

The findings support a model that Miller’s lab has previously put forth, which proposes that the brain’s spatial organization helps it to incorporate new information, which is carried by high-frequency oscillations, into existing memories and brain processes, which are maintained by low-frequency oscillations. As information passes from layer to layer, input can be incorporated as needed to help the brain perform particular tasks such as baking a new cookie recipe or remembering a phone number.

“The consequence of a laminar separation of these frequencies, as we observed, may be to allow superficial layers to represent external sensory information with faster frequencies, and for deep layers to represent internal cognitive states with slower frequencies,” Bastos says. “The high-level implication is that the cortex has multiple mechanisms involving both anatomy and oscillations to separate ‘external’ from ‘internal’ information.”

Under this theory, imbalances between high- and low-frequency oscillations can lead to either attention deficits such as ADHD, when the higher frequencies dominate and too much sensory information gets in, or delusional disorders such as schizophrenia, when the low frequency oscillations are too strong and not enough sensory information gets in.

“The proper balance between the top-down control signals and the bottom-up sensory signals is important for everything the cortex does,” Miller says. “When the balance goes awry, you get a wide variety of neuropsychiatric disorders.”

The researchers are now exploring whether measuring these oscillations could help to diagnose these types of disorders. They are also investigating whether rebalancing the oscillations could alter behavior — an approach that could one day be used to treat attention deficits or other neurological disorders, the researchers say.

The researchers also hope to work with other labs to characterize the layered oscillation patterns in more detail across different brain regions.

“Our hope is that with enough of that standardized reporting, we will start to see common patterns of activity across different areas or functions that might reveal a common mechanism for computation that can be used for motor outputs, for vision, for memory and attention, et cetera,” Mendoza-Halliday says.

The research was funded by the U.S. Office of Naval Research, the U.S. National Institutes of Health, the U.S. National Eye Institute, the U.S. National Institute of Mental Health, the Picower Institute, a Simons Center for the Social Brain Postdoctoral Fellowship, and a Canadian Institutes of Health Postdoctoral Fellowship.


From Thoughts to Words: How AI Deciphers Neural Signals to Help a Man With ALS Speak

"brain-computer interfaces are a groundbreaking technology that can help paralyzed people regain functions they’ve lost.".

man-with-ASL-using-AI-to-speak

Brain-computer interfaces are a groundbreaking technology that can help paralyzed people regain functions they’ve lost, like moving a hand. These devices record signals from the brain and decipher the user’s intended action, bypassing damaged or degraded nerves that would normally transmit those brain signals to control muscles.

Since 2006, demonstrations of brain-computer interfaces in humans have primarily focused on restoring arm and hand movements by enabling people to control computer cursors or robotic arms. Recently, researchers have begun developing speech brain-computer interfaces to restore communication for people who cannot speak.

As the user attempts to talk, these brain-computer interfaces record the person’s unique brain signals associated with attempted muscle movements for speaking and then translate them into words. These words can then be displayed as text on a screen or spoken aloud using text-to-speech software.

I’m a researcher in the Neuroprosthetics Lab at the University of California, Davis, which is part of the BrainGate2 clinical trial. My colleagues and I recently demonstrated a speech brain-computer interface that deciphers the attempted speech of a man with ALS, or amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease. The interface converts neural signals into text with over 97% accuracy. Key to our system is a set of artificial intelligence language models – artificial neural networks that help interpret natural ones.

Recording Brain Signals

The first step in our speech brain-computer interface is recording brain signals. There are several sources of brain signals, some of which require surgery to record. Surgically implanted recording devices can capture high-quality brain signals because they are placed closer to neurons, resulting in stronger signals with less interference. These neural recording devices include grids of electrodes placed on the brain’s surface or electrodes implanted directly into brain tissue.

In our study, we used electrode arrays surgically placed in the speech motor cortex, the part of the brain that controls muscles related to speech, of the participant, Casey Harrell. We recorded neural activity from 256 electrodes as Harrell attempted to speak.

Decoding Brain Signals

The next challenge is relating the complex brain signals to the words the user is trying to say.

One approach is to map neural activity patterns directly to spoken words. This method requires recording brain signals corresponding to each word multiple times to identify the average relationship between neural activity and specific words. While this strategy works well for small vocabularies, as demonstrated in a 2021 study with a 50-word vocabulary, it becomes impractical for larger ones. Imagine asking the brain-computer interface user to try to say every word in the dictionary multiple times – it could take months, and it still wouldn’t work for new words.

Instead, we use an alternative strategy: mapping brain signals to phonemes, the basic units of sound that make up words. In English, there are 39 phonemes, including ch, er, oo, pl and sh, that can be combined to form any word. We can measure the neural activity associated with every phoneme multiple times just by asking the participant to read a few sentences aloud. By accurately mapping neural activity to phonemes, we can assemble them into any English word, even ones the system wasn’t explicitly trained with.

To map brain signals to phonemes, we use advanced machine learning models. These models are particularly well-suited for this task due to their ability to find patterns in large amounts of complex data that would be impossible for humans to discern. Think of these models as super-smart listeners who can pick out important information from noisy brain signals, much like you might focus on a conversation in a crowded room. Using these models, we were able to decipher phoneme sequences during attempted speech with over 90% accuracy.
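To show the shape of this decoding step, the sketch below maps one time bin of neural features (say, activity from 256 electrodes) to a probability distribution over 39 phonemes with a plain softmax layer. The untrained random weights stand in for the far more capable sequence models used in practice.

```python
import numpy as np

n_electrodes, n_phonemes = 256, 39
rng = np.random.default_rng(3)
W = rng.normal(0.0, 0.01, (n_phonemes, n_electrodes))  # placeholder weights
b = np.zeros(n_phonemes)

def phoneme_probs(features):
    """Softmax over phoneme classes for one time bin of neural features."""
    logits = W @ features + b
    exps = np.exp(logits - logits.max())  # subtract max for stability
    return exps / exps.sum()

frame = rng.normal(size=n_electrodes)  # one fake time bin of activity
probs = phoneme_probs(frame)
print("most likely phoneme index:", int(probs.argmax()),
      "p =", round(float(probs.max()), 3))
```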

The brain-computer interface uses a clone of Casey Harrell’s voice to read aloud the text it deciphers from his neural activity.

From Phonemes to Words

Once we have the deciphered phoneme sequences, we need to convert them into words and sentences. This is challenging, especially if the deciphered phoneme sequence isn’t perfectly accurate. To solve this puzzle, we use two complementary types of machine learning language models.

The first is n-gram language models, which predict which word is most likely to follow a set of n words. We trained a 5-gram, or five-word, language model on millions of sentences to predict the likelihood of a word based on the previous four words, capturing local context and common phrases. For example, after “I am very good,” it might suggest “today” as more likely than “potato.” Using this model, we convert our phoneme sequences into the 100 most likely word sequences, each with an associated probability.
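A count-based n-gram model is simple enough to sketch directly. The tiny “corpus” below is made up, but it shows the mechanic: count how often each word follows a four-word context, then normalize.

```python
from collections import Counter, defaultdict

# Minimal count-based 5-gram model: P(word | previous four words).
corpus = "i am very good today . i am very good indeed .".split()

counts = defaultdict(Counter)
for i in range(len(corpus) - 4):
    context = tuple(corpus[i:i + 4])
    counts[context][corpus[i + 4]] += 1  # count the word after the context

def ngram_prob(context, word):
    following = counts[tuple(context)]
    total = sum(following.values())
    return following[word] / total if total else 0.0

ctx = ["i", "am", "very", "good"]
print("P(today  | i am very good) =", ngram_prob(ctx, "today"))
print("P(potato | i am very good) =", ngram_prob(ctx, "potato"))
```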

The second is large language models, which power AI chatbots and also predict which words most likely follow others. We use large language models to refine our choices. These models, trained on vast amounts of diverse text, have a broader understanding of language structure and meaning. They help us determine which of our 100 candidate sentences makes the most sense in a wider context.

By carefully balancing probabilities from the n-gram model, the large language model, and our initial phoneme predictions, we can make a highly educated guess about what the brain-computer interface user is trying to say. This multi-step process allows us to handle the uncertainties in phoneme decoding and produce coherent, contextually appropriate sentences.
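One way to picture this balancing act is as a weighted sum of log-probabilities from the three sources, used to rescore each candidate sentence. Every candidate, probability and mixing weight below is made up for illustration; the actual system’s scoring is more elaborate.

```python
import math

# Illustrative rescoring: combine phoneme-decoder, n-gram and large
# language model scores as a weighted sum of log-probabilities.
candidates = {
    "i am very good today":  {"phoneme": 0.20, "ngram": 0.100, "llm": 0.30},
    "i am very good potato": {"phoneme": 0.22, "ngram": 0.001, "llm": 0.0001},
}

w_phoneme, w_ngram, w_llm = 1.0, 0.5, 0.8  # hypothetical mixing weights

def score(p):
    return (w_phoneme * math.log(p["phoneme"])
            + w_ngram * math.log(p["ngram"])
            + w_llm * math.log(p["llm"]))

best = max(candidates, key=lambda sentence: score(candidates[sentence]))
print("best guess:", best)
```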

Real-World Benefits

In practice, this speech-decoding strategy has been remarkably successful. We’ve enabled Casey Harrell, a man with ALS, to “speak” with over 97% accuracy using just his thoughts. This breakthrough allows him to easily converse with his family and friends for the first time in years, all in the comfort of his own home.

Speech brain-computer interfaces represent a significant step forward in restoring communication. As we continue to refine these devices, they hold the promise of giving a voice to those who have lost the ability to speak, reconnecting them with their loved ones and the world around them.

However, challenges remain, such as making the technology more accessible, portable, and durable over years of use. Despite these hurdles, speech brain-computer interfaces are a powerful example of how science and technology can come together to solve complex problems and dramatically improve people’s lives.

Nicholas Card is a postdoctoral fellow in neuroscience and neuroengineering at the University of California, Davis. This article is republished from The Conversation under a Creative Commons license. Read the original article.



Prioritising the unexpected: new brain mechanism uncovered

28 August 2024

Neuroscientists at UCL have shown how an animal’s brain reacts to seeing something unexpected by prioritising the surprising sensory information.


The researchers discovered how two brain areas, the neocortex and the thalamus, work together to detect discrepancies between what animals expect from their environment and actual events. The brain areas selectively boost, or prioritise, any unexpected sensory information.

These findings enhance our understanding of predictive processing in the brain and could offer insights into how brain circuits are altered in autism spectrum disorders and schizophrenia spectrum disorders.

The research, published today in Nature, outlines how scientists at the Sainsbury Wellcome Centre at UCL studied mice in a virtual reality environment, taking us a step closer to understanding both the nature of prediction error signals in the brain (a prediction error is the discrepancy between expectations and reality) and the mechanisms by which they arise.

Lead author Professor Sonja Hofer, Group Leader at SWC, said: “Our brains constantly predict what to expect in the world around us and the consequences of our actions. When these predictions turn out wrong, this causes strong activation of different brain areas, and such prediction error signals are important for helping us learn from our mistakes and update our predictions.

"But despite their importance, surprisingly little is known about the neural circuit mechanisms responsible for their implementation in the brain.”

To study how the brain processes expected and unexpected events, the researchers placed mice in a virtual reality environment where they could navigate along a familiar corridor to get to a reward. The virtual environment enabled the team to precisely control visual input and introduce unexpected images on the walls. By using a technique called two-photon calcium imaging, the researchers were able to record the neural activity from many individual neurons in the primary visual cortex (V1), the first area in the brain’s neocortex to receive visual information from the eyes.

First author Dr Shohei Furutachi (Sainsbury Wellcome Centre at UCL) said: “Previous theories proposed that prediction error signals encode how the actual visual input is different from expectations, but surprisingly we found no experimental evidence for this. Instead, we discovered that the brain boosts the responses of neurons that have the strongest preference for the unexpected visual input. The error signal we observe is a consequence of this selective amplification of visual information.

“This implies that our brain detects discrepancies between predictions and actual inputs to make unexpected events more salient.”
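A toy calculation illustrates the reported effect of selective amplification: on an unexpected trial, each neuron’s response is multiplicatively boosted in proportion to how strongly it prefers the current stimulus, so the most strongly tuned neurons gain the most. All numbers below are illustrative, not fitted to the recordings.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
preference = rng.uniform(0.0, 1.0, n)  # tuning to the surprise stimulus
baseline = 10.0 * preference           # response on expected trials (a.u.)

def response(expected, gain=0.8):
    # Multiplicative boost scaled by each neuron's stimulus preference.
    boost = 1.0 if expected else 1.0 + gain * preference
    return baseline * boost

extra = response(expected=False) - response(expected=True)
print("extra drive per neuron:", np.round(extra, 2))
print("largest boost goes to the most tuned neuron:",
      int(extra.argmax()) == int(preference.argmax()))
```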

To understand how the brain generates this amplification of the unexpected sensory input in the visual cortex, the team used a technique called optogenetics to inactivate or activate different groups of neurons. They found two groups of neurons that were important for causing the prediction error signal in the visual cortex: vasoactive intestinal polypeptide (VIP)-expressing inhibitory interneurons in V1 and a thalamic brain region called the pulvinar, which integrates information from many neocortical and subcortical areas and is strongly connected to V1. But the researchers found that these two groups of neurons interact in a surprising way.

Dr Furutachi explained: “Often in neuroscience we focus on studying one brain region or pathway at a time. But coming from a molecular biology background, I was fascinated by how different molecular pathways synergistically interact to enable flexible and contextual regulation. I decided to test the possibility that cooperation could be occurring at the level of neural circuits, between VIP neurons and the pulvinar.”

And indeed, Dr Furutachi’s work revealed that VIP neurons and the pulvinar act synergistically. VIP neurons act like a switchboard: when they are off, the pulvinar suppresses activity in the neocortex, but when VIP neurons are on, the pulvinar can strongly and selectively boost sensory responses in the neocortex. The cooperative interaction of these two pathways thus mediates the sensory prediction error signals in the visual cortex.

The next steps for the team are to explore how and where in the brain the animals’ predictions are compared with the actual sensory input to compute sensory prediction errors and how prediction error signals drive learning. They are also exploring how their findings could help contribute to understanding autism spectrum disorders (ASDs) and schizophrenia spectrum disorders (SSDs).

Dr Furutachi added: “It has been proposed that ASDs and SSDs both can be explained by an imbalance in the prediction error system. We are now trying to apply our discovery to ASDs and SSDs model animals to study the mechanistic neural circuit underpinnings of these disorders.”

This research was funded by the Gatsby Charitable Foundation and Wellcome, alongside support from the European Research Council, the SNSF and Biozentrum.




Two Human Brains Linked, Play ’20 Questions’

I-LABS | September 13, 2015 | Media Coverage, Publication, Research


In the latest advance in brain-to-brain communication, I-LABS researchers demonstrate how two brains collaboratively problem solve.


University of Washington researchers recently used a direct brain-to-brain connection to enable pairs of participants to play a question-and-answer game by transmitting signals from one brain to the other over the Internet.

The experiment, detailed September 23 in PLOS ONE, is thought to be the first to show that two brains can be directly linked to allow one person to accurately guess what’s on another person’s mind.

“This is the most complex brain-to-brain experiment, I think, that’s been done to date in humans,” said lead author Andrea Stocco, a faculty member at I-LABS.

“It uses conscious experiences through signals that are experienced visually, and it requires two people to collaborate,” Stocco said.

Co-author Chantel Prat, an I-LABS faculty member, added: “They have to interpret something they’re seeing with their brains.”

Stocco and Prat also describe the experiment in an accompanying video.

Previously, the researchers demonstrated the transfer of motor information through the brain-to-brain interface, a finding described in a 2014 research paper in PLOS ONE.

The research team, which includes Rajesh Rao, a UW professor of computer science and engineering, is now exploring the possibility of “brain tutoring,” transferring signals directly from healthy brains to ones that are developmentally impaired or impacted by external factors such as a stroke or accident, or simply to transfer knowledge from teacher to pupil.

The project is funded by a grant from the W.M. Keck Foundation.

Read the university news release. Read the research paper in PLOS ONE.

Selected media coverage: CNN International, The Guardian, Xinhua News Agency, NBC News, U.S. News & World Report, Fortune, Newsweek, Gizmodo, Seattle Times, KIRO News, io9.

Published: 26 August 2024

Neural populations in the language network differ in the size of their temporal receptive windows

Tamar I. Regev, Colton Casto, Eghbal A. Hosseini, Markus Adamek, Anthony L. Ritaccio, Jon T. Willie, Peter Brunner & Evelina Fedorenko

Nature Human Behaviour (2024)


Despite long knowing what brain areas support language comprehension, our knowledge of the neural computations that these frontal and temporal regions implement remains limited. One important unresolved question concerns functional differences among the neural populations that comprise the language network. Here we leveraged the high spatiotemporal resolution of human intracranial recordings (n = 22) to examine responses to sentences and linguistically degraded conditions. We discovered three response profiles that differ in their temporal dynamics. These profiles appear to reflect different temporal receptive windows, with average windows of about 1, 4 and 6 words, respectively. Neural populations exhibiting these profiles are interleaved across the language network, which suggests that all language regions have direct access to distinct, multiscale representations of linguistic input—a property that may be critical for the efficiency and robustness of language processing.



Data availability

Preprocessed data, all stimuli and statistical results, as well as selected additional analyses are available on OSF at https://osf.io/xfbr8/ (ref. 37). Raw data may be provided upon request to the corresponding authors and institutional approval of a data-sharing agreement.

Code availability

Code used to conduct analyses and generate figures from the preprocessed data is available publicly on GitHub at https://github.com/coltoncasto/ecog_clustering_PUBLIC (ref. 93). The VERA software suite used to perform electrode localization can also be found on GitHub at https://github.com/neurotechcenter/VERA (ref. 82).

Fedorenko, E., Hsieh, P. J., Nieto-Castañón, A., Whitfield-Gabrieli, S. & Kanwisher, N. New method for fMRI investigations of language: defining ROIs functionally in individual subjects. J. Neurophysiol. 104 , 1177–1194 (2010).

Article   PubMed   PubMed Central   Google Scholar  

Pallier, C., Devauchelle, A. D. & Dehaene, S. Cortical representation of the constituent structure of sentences. Proc. Natl Acad. Sci. USA 108 , 2522–2527 (2011).

Article   CAS   PubMed   PubMed Central   Google Scholar  

Regev, M., Honey, C. J., Simony, E. & Hasson, U. Selective and invariant neural responses to spoken and written narratives. J. Neurosci. 33 , 15978–15988 (2013).

Scott, T. L., Gallée, J. & Fedorenko, E. A new fun and robust version of an fMRI localizer for the frontotemporal language system. Cogn. Neurosci. 8 , 167–176 (2017).

Diachek, E., Blank, I., Siegelman, M., Affourtit, J. & Fedorenko, E. The domain-general multiple demand (MD) network does not support core aspects of language comprehension: a large-scale fMRI investigation. J. Neurosci. 40 , 4536–4550 (2020).

Malik-Moraleda, S. et al. An investigation across 45 languages and 12 language families reveals a universal language network. Nat. Neurosci. 25 , 1014–1019 (2022).

Fedorenko, E., Behr, M. K. & Kanwisher, N. Functional specificity for high-level linguistic processing in the human brain. Proc. Natl Acad. Sci. USA 108 , 16428–16433 (2011).

Monti, M. M., Parsons, L. M. & Osherson, D. N. Thought beyond language: neural dissociation of algebra and natural language. Psychol. Sci. 23 , 914–922 (2012).

Deen, B., Koldewyn, K., Kanwisher, N. & Saxe, R. Functional organization of social perception and cognition in the superior temporal sulcus. Cereb. Cortex 25 , 4596–4609 (2015).

Ivanova, A. A. et al. The language network is recruited but not required for nonverbal event semantics. Neurobiol. Lang. 2 , 176–201 (2021).

Chen, X. et al. The human language system, including its inferior frontal component in “Broca’s area,” does not support music perception. Cereb. Cortex 33 , 7904–7929 (2023).

Fedorenko, E., Ivanova, A. A. & Regev, T. I. The language network as a natural kind within the broader landscape of the human brain. Nat. Rev. Neurosci. 25 , 289–312 (2024).

Okada, K. & Hickok, G. Identification of lexical-phonological networks in the superior temporal sulcus using functional magnetic resonance imaging. Neuroreport 17 , 1293–1296 (2006).

Graves, W. W., Grabowski, T. J., Mehta, S. & Gupta, P. The left posterior superior temporal gyrus participates specifically in accessing lexical phonology. J. Cogn. Neurosci. 20 , 1698–1710 (2008).

DeWitt, I. & Rauschecker, J. P. Phoneme and word recognition in the auditory ventral stream. Proc. Natl Acad. Sci. USA 109 , E505–E514 (2012).

Price, C. J., Moore, C. J., Humphreys, G. W. & Wise, R. J. S. Segregating semantic from phonological processes during reading. J. Cogn. Neurosci. 9 , 727–733 (1997).

Mesulam, M. M. et al. Words and objects at the tip of the left temporal lobe in primary progressive aphasia. Brain 136 , 601–618 (2013).

Friederici, A. D. The brain basis of language processing: from structure to function. Physiol. Rev. 91 , 1357–1392 (2011).

Hagoort, P. On Broca, brain, and binding: a new framework. Trends Cogn. Sci. 9 , 416–423 (2005).

Grodzinsky, Y. & Santi, A. The battle for Broca’s region. Trends Cogn. Sci. 12 , 474–480 (2008).

Matchin, W. & Hickok, G. The cortical organization of syntax. Cereb. Cortex 30 , 1481–1498 (2020).

Fedorenko, E., Blank, I. A., Siegelman, M. & Mineroff, Z. Lack of selectivity for syntax relative to word meanings throughout the language network. Cognition 203 , 104348 (2020).

Bautista, A. & Wilson, S. M. Neural responses to grammatically and lexically degraded speech. Lang. Cogn. Neurosci. 31 , 567–574 (2016).

Anderson, A. J. et al. Deep artificial neural networks reveal a distributed cortical network encoding propositional sentence-level meaning. J. Neurosci. 41 , 4100–4119 (2021).

Regev, T. I. et al. High-level language brain regions process sublexical regularities. Cereb. Cortex 34 , bhae077 (2024).

Mukamel, R. & Fried, I. Human intracranial recordings and cognitive neuroscience. Annu. Rev. Psychol. 63 , 511–537 (2011).

Fedorenko, E. et al. Neural correlate of the construction of sentence meaning. Proc. Natl Acad. Sci. USA 113 , E6256–E6262 (2016).

Nelson, M. J. et al. Neurophysiological dynamics of phrase-structure building during sentence processing. Proc. Natl Acad. Sci. USA 114 , E3669–E3678 (2017).

Woolnough, O. et al. Spatiotemporally distributed frontotemporal networks for sentence reading. Proc. Natl Acad. Sci. USA 120 , e2300252120 (2023).

Desbordes, T. et al. Dimensionality and ramping: signatures of sentence integration in the dynamics of brains and deep language models. J. Neurosci. 43 , 5350–5364 (2023).

Goldstein, A. et al. Shared computational principles for language processing in humans and deep language models. Nat. Neurosci. 25 , 369–380 (2022).

Lerner, Y., Honey, C. J., Silbert, L. J. & Hasson, U. Topographic mapping of a hierarchy of temporal receptive windows using a narrated story. J. Neurosci. 31 , 2906–2915 (2011).

Blank, I. A. & Fedorenko, E. No evidence for differences among language regions in their temporal receptive windows. Neuroimage 219 , 116925 (2020).

Jain, S. et al. Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech. In NeurIPS Proc. Advances in Neural Information Processing Systems 33 (NeurIPS 2020) (eds Larochelle, H. et al.) 1–12 (NeurIPS, 2020).

Fedorenko, E., Nieto-Castañon, A. & Kanwisher, N. Lexical and syntactic representations in the brain: an fMRI investigation with multi-voxel pattern analyses. Neuropsychologia 50 , 499–513 (2012).

Shain, C. et al. Distributed sensitivity to syntax and semantics throughout the human language network. J. Cogn. Neurosci. 36 , 1427–1471 (2024).

Regev, T. I., Casto, C. & Fedorenko, E. Neural populations in the language network differ in the size of their temporal receptive windows. OSF osf.io/xfbr8 (2024).

Stelzer, J., Chen, Y. & Turner, R. Statistical inference and multiple testing correction in classification-based multi-voxel pattern analysis (MVPA): random permutations and cluster size control. Neuroimage 65 , 69–82 (2013).

Maris, E. & Oostenveld, R. Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 164 , 177–190 (2007).

Hasson, U., Yang, E., Vallines, I., Heeger, D. J. & Rubin, N. A hierarchy of temporal receptive windows in human cortex. J. Neurosci. 28 , 2539–2550 (2008).

Norman-Haignere, S. V. et al. Multiscale temporal integration organizes hierarchical computation in human auditory cortex. Nat. Hum. Behav. 6 , 455–469 (2022).

Overath, T., McDermott, J. H., Zarate, J. M. & Poeppel, D. The cortical analysis of speech-specific temporal structure revealed by responses to sound quilts. Nat. Neurosci. 18 , 903–911 (2015).

Keshishian, M. et al. Joint, distributed and hierarchically organized encoding of linguistic features in the human auditory cortex. Nat. Hum. Behav. 7 , 740–753 (2023).

Braga, R. M., DiNicola, L. M., Becker, H. C. & Buckner, R. L. Situating the left-lateralized language network in the broader organization of multiple specialized large-scale distributed networks. J. Neurophysiol. 124 , 1415–1448 (2020).

Fedorenko, E. & Blank, I. A. Broca’s area is not a natural kind. Trends Cogn. Sci. 24 , 270–284 (2020).

Dick, F. et al. Language deficits, localization, and grammar: evidence for a distributive model of language breakdown in aphasic patients and neurologically intact individuals. Psychol. Rev. 108 , 759–788 (2001).

Runyan, C. A., Piasini, E., Panzeri, S. & Harvey, C. D. Distinct timescales of population coding across cortex. Nature 548 , 92–96 (2017).

Murray, J. D. et al. A hierarchy of intrinsic timescales across primate cortex. Nat. Neurosci. 17 , 1661–1663 (2014).

Chien, H. S. & Honey, C. J. Constructing and forgetting temporal context in the human cerebral cortex. Neuron 106 , 675–686 (2020).

Jacoby, N. & Fedorenko, E. Discourse-level comprehension engages medial frontal Theory of Mind brain regions even for expository texts. Lang. Cogn. Neurosci. 35 , 780–796 (2018).

Caucheteux, C., Gramfort, A. & King, J. R. Evidence of a predictive coding hierarchy in the human brain listening to speech. Nat. Hum. Behav. 7 , 430–441 (2023).

Chang, C. H. C., Nastase, S. A. & Hasson, U. Information flow across the cortical timescale hierarchy during narrative construction. Proc. Natl Acad. Sci. USA 119 , e2209307119 (2022).

Bozic, M., Tyler, L. K., Ives, D. T., Randall, B. & Marslen-Wilson, W. D. Bihemispheric foundations for human speech comprehension. Proc. Natl Acad. Sci. USA 107 , 17439–17444 (2010).

Paulk, A. C. et al. Large-scale neural recordings with single neuron resolution using Neuropixels probes in human cortex. Nat. Neurosci. 25 , 252–263 (2022).

Leonard, M. K. et al. Large-scale single-neuron speech sound encoding across the depth of human cortex. Nature 626 , 593–602 (2024).

Evans, N. & Levinson, S. C. The myth of language universals: language diversity and its importance for cognitive science. Behav. Brain Sci. 32 , 429–448 (2009).

Shannon, C. E. Communication in the presence of noise. Proc. IRE 37 , 10–21 (1949).

Levy, R. Expectation-based syntactic comprehension. Cognition 106 , 1126–1177 (2008).

Levy, R. A noisy-channel model of human sentence comprehension under uncertain input. In Proc. 2008 Conference on Empirical Methods in Natural Language Processing (eds Lapata, M. & Ng, H. T.) 234–243 (Association for Computational Linguistics, 2008).

Gibson, E., Bergen, L. & Piantadosi, S. T. Rational integration of noisy evidence and prior semantic expectations in sentence interpretation. Proc. Natl Acad. Sci. USA 110 , 8051–8056 (2013).

Keshev, M. & Meltzer-Asscher, A. Noisy is better than rare: comprehenders compromise subject–verb agreement to form more probable linguistic structures. Cogn. Psychol. 124 , 101359 (2021).

Gibson, E. et al. How efficiency shapes human language. Trends Cogn. Sci. 23 , 389–407 (2019).

Tuckute, G., Kanwisher, N. & Fedorenko, E. Language in brains, minds, and machines. Annu. Rev. Neurosci. https://doi.org/10.1146/annurev-neuro-120623-101142 (2024).

Norman-Haignere, S., Kanwisher, N. G. & McDermott, J. H. Distinct cortical pathways for music and speech revealed by hypothesis-free voxel decomposition. Neuron 88 , 1281–1296 (2015).

Baker, C. I. et al. Visual word processing and experiential origins of functional selectivity in human extrastriate cortex. Proc. Natl Acad. Sci. USA 104 , 9087–9092 (2007).

Buckner, R. L. & DiNicola, L. M. The brain’s default network: updated anatomy, physiology and evolving insights. Nat. Rev. Neurosci. 20 , 593–608 (2019).

Saxe, R., Brett, M. & Kanwisher, N. Divide and conquer: a defense of functional localizers. Neuroimage 30 , 1088–1096 (2006).

Baldassano, C. et al. Discovering event structure in continuous narrative perception and memory. Neuron 95 , 709–721 (2017).

Wilson, S. M. et al. Recovery from aphasia in the first year after stroke. Brain 146 , 1021–1039 (2023).

Piantadosi, S. T., Tily, H. & Gibson, E. Word lengths are optimized for efficient communication. Proc. Natl Acad. Sci. USA 108 , 3526–3529 (2011).

Shain, C., Blank, I. A., Fedorenko, E., Gibson, E. & Schuler, W. Robust effects of working memory demand during naturalistic language comprehension in language-selective cortex. J. Neurosci. 42 , 7412–7430 (2022).

Schrimpf, M. et al. The neural architecture of language: integrative modeling converges on predictive processing. Proc. Natl Acad. Sci. USA 118 , e2105646118 (2021).

Tuckute, G. et al. Driving and suppressing the human language network using large language models. Nat. Hum. Behav. 8 , 544–561 (2024).

Mollica, F. & Piantadosi, S. T. Humans store about 1.5 megabytes of information during language acquisition. R. Soc. Open Sci. 6 , 181393 (2019).

Skrill, D. & Norman-Haignere, S. V. Large language models transition from integrating across position-yoked, exponential windows to structure-yoked, power-law windows. Adv. Neural Inf. Process. Syst. 36 , 638–654 (2023).

Giglio, L., Ostarek, M., Weber, K. & Hagoort, P. Commonalities and asymmetries in the neurobiological infrastructure for language production and comprehension. Cereb. Cortex 32 , 1405–1418 (2022).

Hu, J. et al. Precision fMRI reveals that the language-selective network supports both phrase-structure building and lexical access during language production. Cereb. Cortex 33 , 4384–4404 (2023).

Lee, E. K., Brown-Schmidt, S. & Watson, D. G. Ways of looking ahead: hierarchical planning in language production. Cognition 129 , 544–562 (2013).

Wechsler, D. Wechsler abbreviated scale of intelligence (WASI) [Database record]. APA PsycTests https://psycnet.apa.org/doi/10.1037/t15170-000 (APA PsycNet, 1999).

Schalk, G., McFarland, D. J., Hinterberger, T., Birbaumer, N. & Wolpaw, J. R. BCI2000: a general-purpose brain-computer interface (BCI) system. IEEE Trans. Biomed. Eng. 51 , 1034–1043 (2004).

Adamek, M., Swift, J. R. & Brunner, P. VERA - Versatile Electrode Localization Framework. Zenodo https://doi.org/10.5281/zenodo.7486842 (2022).

Adamek, M., Swift, J. R. & Brunner, P. VERA - A Versatile Electrode Localization Framework (Version 1.0.0). GitHub https://github.com/neurotechcenter/VERA (2022).

Avants, B. B., Epstein, C. L., Grossman, M. & Gee, J. C. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 12 , 26–41 (2008).

Janca, R. et al. Detection of interictal epileptiform discharges using signal envelope distribution modelling: application to epileptic and non-epileptic intracranial recordings. Brain Topogr. 28 , 172–183 (2015).

Dichter, B. K., Breshears, J. D., Leonard, M. K. & Chang, E. F. The control of vocal pitch in human laryngeal motor cortex. Cell 174 , 21–31 (2018).

Ray, S., Crone, N. E., Niebur, E., Franaszczuk, P. J. & Hsiao, S. S. Neural correlates of high-gamma oscillations (60–200 Hz) in macaque local field potentials and their potential implications in electrocorticography. J. Neurosci. 28 , 11526–11536 (2008).

Lipkin, B. et al. Probabilistic atlas for the language network based on precision fMRI data from >800 individuals. Sci. Data 9 , 529 (2022).

Kučera, H. Computational Analysis of Present-day American English (Univ. Pr. of New England, 1967).

Kaufman, L. & Rousseeuw, P. J. in Finding Groups in Data: An Introduction to Cluster Analysis (eds Kaufman, L. & Rousseeuw, P. J.) Ch. 2 (Wiley, 1990).

Rokach, L. & Maimon, O. in The Data Mining and Knowledge Discovery Handbook (eds Maimon, O. & Rokach, L.) 321–352 (Springer, 2005).

Wilkinson, G. N. & Rogers, C. E. Symbolic description of factorial models for analysis of variance. J. R. Stat. Soc. C Appl. Stat. 22, 392–399 (1973).

Luke, S. G. Evaluating significance in linear mixed-effects models in R. Behav. Res. Methods 49 , 1494–1502 (2017).

Regev, T. I. et al. Neural populations in the language network differ in the size of their temporal receptive windows. GitHub https://github.com/coltoncasto/ecog_clustering_PUBLIC (2024).

Acknowledgements

We thank the participants for agreeing to take part in our study, as well as N. Kanwisher, former and current EvLab members, especially C. Shain and A. Ivanova, and the audience at the Neurobiology of Language conference (2022, Philadelphia) for helpful discussions and comments on the analyses and manuscript. T.I.R. was supported by the Zuckerman-CHE STEM Leadership Program and by the Poitras Center for Psychiatric Disorders Research. C.C. was supported by the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University. A.L.R. was supported by NIH award U01-NS108916. J.T.W. was supported by NIH awards R01-MH120194 and P41-EB018783, and the American Epilepsy Society Research and Training Fellowship for clinicians. P.B. was supported by NIH awards R01-EB026439, U24-NS109103, U01-NS108916, U01-NS128612 and P41-EB018783, the McDonnell Center for Systems Neuroscience, and Fondazione Neurone. E.F. was supported by NIH awards R01-DC016607, R01-DC016950 and U01-NS121471, and research funds from the McGovern Institute for Brain Research, Brain and Cognitive Sciences Department, and the Simons Center for the Social Brain. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Author information

These authors contributed equally: Tamar I. Regev, Colton Casto.

Authors and Affiliations

Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA

Tamar I. Regev, Colton Casto, Eghbal A. Hosseini & Evelina Fedorenko

McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA

Program in Speech and Hearing Bioscience and Technology (SHBT), Harvard University, Boston, MA, USA

Colton Casto & Evelina Fedorenko

Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Allston, MA, USA

Colton Casto

National Center for Adaptive Neurotechnologies, Albany, NY, USA

Markus Adamek, Jon T. Willie & Peter Brunner

Department of Neurosurgery, Washington University School of Medicine, St Louis, MO, USA

Department of Neurology, Mayo Clinic, Jacksonville, FL, USA

Anthony L. Ritaccio

Department of Neurology, Albany Medical College, Albany, NY, USA

Peter Brunner

Contributions

T.I.R. and C.C. equally contributed to study conception and design, data analysis and interpretation of results, and manuscript writing. E.A.H. contributed to data analysis and manuscript editing; M.A. to data collection and analysis; A.L.R., J.T.W. and P.B. to data collection and manuscript editing. E.F. contributed to study conception and design, supervision, interpretation of results and manuscript writing.

Corresponding authors

Correspondence to Tamar I. Regev , Colton Casto or Evelina Fedorenko .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Human Behaviour thanks Nima Mesgarani, Jonathan Venezia and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Dataset 1 k-medoids (k = 3) cluster assignments by participant.

Average cluster responses as in Fig. 2e grouped by participant. Shaded areas around the signal reflect a 99% confidence interval over electrodes. The number of electrodes contributing to each average (n) is denoted above each signal in parentheses. Prototypical responses for each of the three clusters were found in nearly all participants individually. However, for participants with only a few electrodes assigned to a given cluster (for example, P5 Cluster 3), the responses were more variable.

Extended Data Fig. 2 Dataset 1 k-medoids clustering with k = 10.

a) Clustering mean electrode responses (S + W + J + N) using k-medoids with k = 10 and a correlation-based distance. Shading of the data matrix reflects normalized high-gamma power (70–150 Hz). b) Electrode responses visualized on their first two principal components, colored by cluster. c) Timecourses of best representative electrodes (‘medoids’) selected by the algorithm from each of the ten clusters. d) Timecourses averaged across all electrodes in each cluster. Shaded areas around the signal reflect a 99% confidence interval over electrodes. Correlation with the k = 3 cluster averages are shown to the right of the timecourses. Many clusters exhibited high correlations with the k = 3 response profiles from Fig. 2 .
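For readers who want to reproduce this kind of analysis, the clustering step can be sketched in a few lines. The snippet below is a minimal illustration of k-medoids with a correlation-based distance using the scikit-learn-extra package; the array shape and package choice are assumptions for illustration, and the authors' own implementation is in the GitHub repository cited under Code availability.

```python
# Minimal sketch of k-medoids clustering with a correlation-based distance.
# `responses` stands in for an (n_electrodes, n_timepoints) matrix of mean
# high-gamma timecourses; the real data and code are linked above.
import numpy as np
from sklearn_extra.cluster import KMedoids

rng = np.random.default_rng(0)
responses = rng.standard_normal((177, 640))  # placeholder data

dist = 1.0 - np.corrcoef(responses)  # correlation distance: 1 - Pearson r

model = KMedoids(n_clusters=10, metric="precomputed", random_state=0)
labels = model.fit_predict(dist)
medoids = responses[model.medoid_indices_]  # best representative electrodes
```

Setting n_clusters=3 instead of 10 gives the main analysis of Fig. 2; the correlation-based distance means electrodes are grouped by the shape of their timecourses, not their amplitudes.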

Extended Data Fig. 3 All Dataset 1 responses.

a-c) All Dataset 1 electrode responses. The timecourses (concatenated across the four conditions, ordered: sentences, word lists, Jabberwocky sentences, non-word lists) of all electrodes in Dataset 1 sorted by their correlation to the cluster medoid (medoid shown at the bottom of each cluster). Colors reflect the reliability of the measured neural signal, computed by correlating responses to odd and even trials (Fig. 1d ). The estimated temporal receptive window (TRW) using the toy model from Fig. 4 is displayed to the left, and the participant who contributed the electrode is displayed to the right. There was strong consistency in the responses from individual electrodes within a cluster (with more variability in the less reliable electrodes), and electrodes with responses that were more similar to the cluster medoid tended to be more reliable (more pink). Note that there were two reliable response profiles (relatively pink) that showed a pattern that was distinct from the three prototypical response profiles: One electrode in Cluster 2 (the 10th electrode from the top in panel B) responded only to the onset of the first word/nonword in each trial; and one electrode in Cluster 3 (the 4th electrode from the top in panel C) was highly locked to all onsets except the first word/nonword. These profiles indicate that although the prototypical clusters explain a substantial amount of the functional heterogeneity of responses in the language network, they were not the only observed responses.
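The odd/even reliability measure used to color these plots is simple to state in code. A minimal sketch, assuming trials are stacked in an (n_trials, n_timepoints) array:

```python
# Split-half reliability: correlate the mean timecourse over odd trials with
# the mean over even trials. The array layout is an assumption.
import numpy as np

def split_half_reliability(trials: np.ndarray) -> float:
    odd = trials[0::2].mean(axis=0)
    even = trials[1::2].mean(axis=0)
    return float(np.corrcoef(odd, even)[0, 1])
```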

Extended Data Fig. 4 Partial correlations of individual response profiles with each of the cluster medoids.

a) Pearson correlations of all response profiles with each of the cluster medoids, grouped by cluster assignment. b) Partial correlations ( Methods ) of all response profiles with each of the cluster medoids, controlling for the other two cluster medoids, grouped by cluster assignment. c) Response profiles from electrodes assigned to Cluster 1 that had a high partial correlation ( > 0.2, arbitrarily defined threshold) with the Cluster 2 medoid (and split-half reliability>0.3). Top: Average over all electrodes that met these criteria (n = 18, black). The Cluster 1 medoid is shown in red, and the Cluster 2 medoid is shown in green. Bottom: Four sample electrodes (black). d) Response profiles assigned to Cluster 2 that had a high partial correlation ( > 0.2, arbitrarily defined threshold) with the Cluster 1 medoid (and split-half reliability>0.3). Top: Average over all electrodes that meet these criteria (n = 12, black). The Cluster 1 medoid is shown in red, and the Cluster 2 medoid is shown in green. Bottom: Four sample electrodes (black; see osf.io/xfbr8/ for all mixed response profiles with split-half reliability>0.3). e) Anatomical distribution of electrodes in Dataset 1 colored by their partial correlation with a given cluster medoid (controlling for the other two medoids). Cluster-1- and Cluster-2-like responses were present throughout frontal and temporal areas (with Cluster 1 responses having a slightly higher concentration in the temporal pole and Cluster 2 responses having a slightly higher concentration in the superior temporal gyrus (STG)), whereas Cluster-3-like responses were localized to the posterior STG.
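The partial correlations in panel (b) can be computed by regressing the two control medoids out of both signals and correlating the residuals. A minimal sketch follows; names are illustrative, and the authors' exact procedure is described in their Methods.

```python
# Partial correlation of an electrode response y with one medoid x,
# controlling for the other medoids (columns of `controls`).
import numpy as np

def partial_corr(y: np.ndarray, x: np.ndarray, controls: np.ndarray) -> float:
    Z = np.column_stack([np.ones(len(y)), controls])   # intercept + controls
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residualize y
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residualize x
    return float(np.corrcoef(ry, rx)[0, 1])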

Extended Data Fig. 5 N-gram frequencies of sentences and word lists diverge with n-gram length.

N-gram frequencies were extracted from the Google n-gram online platform ( https://books.google.com/ngrams/ ), averaging across Google Books corpora between the years 2010 and 2020. For each individual word, the n-gram frequency for n = 1 was the frequency of that word in the corpus; for n = 2 it was the frequency of the bigram (sequence of 2 words) ending in that word; for n = 3 it was the frequency of the trigram (sequence of 3 words) ending in that word; and so on. Sequences that were not found in the corpus were assigned a value of 0. Results are only presented up to n = 4 because for n > 4 most of the string sequences, from both the Sentence and Word-list conditions, were not found in the corpora. The plot shows that the difference between the log n-gram values for the sentences and word lists in our stimulus set grows as a function of n. Error bars represent the standard error of the mean across all n-grams extracted from the stimuli used (640, 560, 480 and 399 n-grams for n-gram lengths 1, 2, 3 and 4, respectively).
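The per-word n-gram lookup described here is straightforward to sketch. Assuming `freq` maps an n-gram string to its corpus frequency (for example, built from the Google Books counts mentioned above), with unseen sequences given 0:

```python
# For each word position, look up the frequency of the n-gram ending at that
# word; sequences absent from the corpus get 0, as in the caption.
from typing import Dict, List

def ngram_frequencies(words: List[str], n: int,
                      freq: Dict[str, float]) -> List[float]:
    out = []
    for i in range(n - 1, len(words)):
        gram = " ".join(words[i - n + 1 : i + 1])
        out.append(freq.get(gram, 0.0))
    return out
```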

Extended Data Fig. 6 Temporal receptive window (TRW) estimates with kernels of different shapes.

The toy TRW model from Fig. 4 was applied using five different kernel shapes: cosine ( a ), ‘wide’ Gaussian (Gaussian curves with a standard deviation of σ /2 that were truncated at +/− 1 standard deviation, as used in Fig. 4 ; b ), ‘narrow’ Gaussian (Gaussian curves with a standard deviation of σ /16 that were truncated at +/− 8 standard deviations; c ), a square (that is, boxcar) function (1 for the entire window; d ) and a linear asymmetric function (linear function with a value of 0 initially and a value of 1 at the end of the window; e ). For each kernel ( a-e ), the plots represent (left to right, all details are identical to Fig. 4 in the manuscript): 1) The kernel shapes for TRW = 1, 2, 3, 4, 6 and 8 words, superimposed on the simplified stimulus train; 2) The simulated neural signals for each of those TRWs; 3) violin plots of best fitted TRW values across electrodes (each dot represents an electrode, horizontal black lines are means across the electrodes, white dots are medians, vertical thin box represents lower and upper quartiles and ‘x’ marks indicate outliers; more than 1.5 interquartile ranges above the upper quartile or less than 1.5 interquartile ranges below the lower quartile) for all electrodes (black), or electrodes from only Clusters 1 (red) 2 (green) or 3 (blue); and 4) Estimated TRW as a function of goodness of fit. Each dot is an electrode, its size represents the reliability of its neural response, computed via correlation between the mean signals when using only odd vs. only even trials, x-axis is the electrode’s best fitted TRW, y-axis is the goodness of fit, computed via correlation between the neural signal and the closest simulated signal. For all kernels the TRWs showed a decreasing trend from Cluster 1 to 3.
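The logic of the toy model is the same for every kernel shape: convolve a kernel whose width equals the candidate TRW with a simplified stimulus train, then keep the width whose simulated signal best correlates with the measured one. A minimal sketch with the 'wide' Gaussian kernel; shapes, sampling and names are illustrative, not the authors' exact code.

```python
import numpy as np

def wide_gaussian_kernel(width_samples: int) -> np.ndarray:
    # Gaussian with SD = width/2, truncated at +/- 1 SD, as in panel (b)
    sigma = width_samples / 2.0
    t = np.linspace(-sigma, sigma, max(width_samples, 1))
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def best_trw(signal, stimulus, trws_in_words, samples_per_word):
    fits = []
    for w in trws_in_words:
        kernel = wide_gaussian_kernel(int(w * samples_per_word))
        simulated = np.convolve(stimulus, kernel, mode="same")
        fits.append(np.corrcoef(signal, simulated)[0, 1])
    best = int(np.argmax(fits))
    return trws_in_words[best], fits[best]  # estimated TRW, goodness of fit
```

Swapping in a cosine, boxcar or asymmetric linear kernel only changes the `wide_gaussian_kernel` function, which is why the cluster-wise TRW ordering is stable across panels (a-e).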

Extended Data Fig. 7 Dataset 1 k-medoids clustering results with only S and N conditions.

a) Search for optimal k using the ‘elbow method’. Top: variance (sum of the distances of all electrodes to their assigned cluster centre) normalized by the variance when k = 1 as a function of k (normalized variance (NV)). Bottom: change in NV as a function of k (NV(k + 1) – NV(k)). After k = 3 the change in variance became more moderate, suggesting that 3 clusters appropriately described Dataset 1 when using only the responses to sentences and non-words (as was the case when all four conditions were used). b) Clustering mean electrode responses (only S and N, importantly) using k-medoids (k = 3) with a correlation-based distance. Shading of the data matrix reflects normalized high-gamma power (70–150 Hz). c) Average timecourse by cluster. Shaded areas around the signal reflect a 99% confidence interval over electrodes (n = 99, n = 61, and n = 17 electrodes for Cluster 1, 2, and 3, respectively). Clusters 1-3 showed a strong similarity to the clusters reported in Fig. 2 . d) Mean condition responses by cluster. Error bars reflect standard error of the mean over electrodes. e) Electrode responses visualized on their first two principal components, colored by cluster. f) Anatomical distribution of clusters across all participants (n = 6). g) Robustness of clusters to electrode omission (random subsets of electrodes were removed in increments of 5). Stars reflect significant similarity with the full dataset (with a p threshold of 0.05; evaluated with a one-sided permutation test, n = 1000 permutations; Methods ). Shaded regions reflect standard error of the mean over randomly sampled subsets of electrodes. Relative to when all conditions were used, Cluster 2 was less robust to electrode omission (although still more robust than Cluster 3), suggesting that responses to word lists and Jabberwocky sentences (both not present here) are particularly important for distinguishing Cluster 2 electrodes from Cluster 1 and 3 electrodes.
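The elbow search in panel (a) amounts to tracking how the within-cluster distance falls as k grows. A sketch building on the k-medoids setup above; the normalization follows the caption, while the implementation details are assumptions.

```python
import numpy as np
from sklearn_extra.cluster import KMedoids

def elbow_curve(dist: np.ndarray, ks=range(1, 11)):
    # dist: precomputed (n, n) correlation-distance matrix
    variance = [KMedoids(n_clusters=k, metric="precomputed",
                         random_state=0).fit(dist).inertia_ for k in ks]
    nv = np.asarray(variance) / variance[0]  # normalized variance NV(k)
    return nv, np.diff(nv)                   # and NV(k+1) - NV(k)
```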

Extended Data Fig. 8 Dataset 2 electrode assignment to most correlated Dataset 1 cluster under ‘winner-take-all’ (WTA) approach.

a) Assigning electrodes from Dataset 2 to the most correlated cluster from Dataset 1. Assignment was performed using the correlation with the Dataset 1 cluster average, not the cluster medoid. Shading of the data matrix reflects normalized high-gamma power (70–150 Hz). b) Average timecourse by group. Shaded areas around the signal reflect a 99% confidence interval over electrodes (n = 142, n = 95, and n = 125 electrodes for groups 1, 2, and 3, respectively). c) Mean condition responses by group. Error bars reflect standard error of the mean over electrodes (n = 142, n = 95, and n = 125 electrodes for groups 1, 2, and 3, respectively, as in b ). d) Electrode responses visualized on their first two principal components, colored by group. e) Anatomical distribution of groups across all participants (n = 16). f-g) Comparison of cluster assignment of electrodes from Dataset 2 using clustering vs. winner-take-all (WTA) approach. f) The numbers in the matrix correspond to the number of electrodes assigned to cluster y during clustering (y-axis) versus the number of electrodes assigned to group x during the WTA approach (x-axis). For instance, there were 44 electrodes that were assigned to Cluster 1 during clustering but were ‘pulled out’ to Group 2 (the analog of Cluster 2) during the WTA approach. The total number of electrodes assigned to each cluster during the clustering approach is shown to the right of each row. The total number of electrodes assigned to each group during the WTA approach is shown at the top of each column. N = 362 is the total number of electrodes in Dataset 2. g) Similar to f, but here the average timecourse across all electrodes assigned to the corresponding cluster/group during both procedures is presented. Shaded areas around the signals reflect a 99% confidence interval over electrodes.
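The winner-take-all assignment itself reduces to a correlation argmax. A minimal sketch, with illustrative array names:

```python
import numpy as np

def wta_assign(responses: np.ndarray, cluster_means: np.ndarray) -> np.ndarray:
    # responses: (n_electrodes, n_t); cluster_means: (n_clusters, n_t)
    r = np.array([[np.corrcoef(e, c)[0, 1] for c in cluster_means]
                  for e in responses])
    return r.argmax(axis=1)  # index of the most correlated Dataset 1 cluster
```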

Extended Data Fig. 9 Anatomical distribution of the clusters in Dataset 2.

a) Anatomical distribution of language-responsive electrodes in Dataset 2 across all subjects in MNI space, colored by cluster. Only Clusters 1 and 3 (those from Dataset 1 that replicate to Dataset 2) are shown. b) Anatomical distribution of language-responsive electrodes in subject-specific space for eight sample participants. c-h) Violin plots of MNI coordinate values for Clusters 1 and 3 in the left and right hemisphere ( c-e and f-h , respectively), where plotted points (n = 16 participants) represent the mean of all coordinate values for a given participant and cluster. The mean across participants is plotted with a black horizontal line, and the median is shown with a white circle. Vertical thin black boxes within violins plots represent the upper and lower quartiles. Significance is evaluated with a LME model ( Methods , Supplementary Tables 3 and 4 ). The Cluster 3 posterior bias from Dataset 1 was weakly present but not statistically reliable.
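The LME comparison referenced here, with a random intercept per participant, can be expressed in one line with statsmodels; the synthetic data frame below is a placeholder standing in for the per-participant coordinate values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "coord": rng.normal(size=64),                 # placeholder MNI values
    "cluster": rng.choice(["1", "3"], size=64),
    "participant": rng.integers(1, 9, size=64).astype(str),
})

# Fixed effect of cluster on coordinate, random intercept per participant
result = smf.mixedlm("coord ~ cluster", df, groups=df["participant"]).fit()
print(result.summary())
```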

Extended Data Fig. 10 Estimation of temporal receptive window (TRW) sizes for electrodes in Dataset 2.

As in Fig. 4 but for electrodes in Dataset 2. a) Best TRW fit (using the toy model from Fig. 4 ) for all electrodes, colored by cluster (when k-medoids clustering with k = 3 was applied, Fig. 6 ) and sized by the reliability of the neural signal as estimated by correlating responses to odd and even trials (Fig. 6c ). The ‘goodness of fit’, or correlation between the simulated and observed neural signal (Sentence condition only), is shown on the y-axis. b) Estimated TRW sizes across all electrodes (grey) and per cluster (red, green, and blue). Black vertical lines correspond to the mean window size and the white dots correspond to the median. ‘x’ marks indicate outliers (more than 1.5 interquartile ranges above the upper quartile or less than 1.5 interquartile ranges below the lower quartile). Significance values were calculated using a linear mixed-effects model (comparing estimate values, two-sided ANOVA for LME, Methods , see Supplementary Table 8 for exact p-values). c-d) Same as A and B , respectively, except that clusters were assigned by highest correlation with Dataset 1 clusters (Extended Data Fig. 8 ). Under this procedure, Cluster 2 reliably separated from Cluster 3 in terms of its TRW (all ps<0.001, evaluated with a LME model, Methods , see Supplementary Table 9 for exact p-values).

Supplementary information

Supplementary Tables 1–11.

Reporting Summary

Peer Review File

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Regev, T.I., Casto, C., Hosseini, E.A. et al. Neural populations in the language network differ in the size of their temporal receptive windows. Nat Hum Behav (2024). https://doi.org/10.1038/s41562-024-01944-2

Received: 16 March 2023

Accepted: 03 July 2024

Published: 26 August 2024

DOI: https://doi.org/10.1038/s41562-024-01944-2



Exploring Inner Speech Recognition via Cross-Perception Approach in EEG and fMRI

1. Introduction

  • We propose a novel cross-perception model that effectively integrates EEG and fMRI data for inner speech recognition.
  • We introduce a multigranularity encoding scheme that captures both temporal and spatial aspects of brain activity during inner speech.
  • We develop an adaptive fusion mechanism that dynamically weights the contributions of different modalities based on their relevance to the recognition task (a minimal sketch of this idea appears after this list).
  • We provide extensive experimental results and analyses, demonstrating the superiority of our multimodal approach over unimodal baselines.
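As a rough illustration of what an adaptive fusion mechanism of this kind can look like, the sketch below gates two projected modality features with softmax weights predicted from the features themselves. It is a generic pattern written for this article, not the authors' architecture; all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Generic softmax-gated fusion of EEG and fMRI feature vectors."""
    def __init__(self, d_eeg: int, d_fmri: int, d: int = 128):
        super().__init__()
        self.proj_eeg = nn.Linear(d_eeg, d)
        self.proj_fmri = nn.Linear(d_fmri, d)
        self.gate = nn.Linear(2 * d, 2)  # one logit per modality

    def forward(self, eeg: torch.Tensor, fmri: torch.Tensor) -> torch.Tensor:
        e, f = self.proj_eeg(eeg), self.proj_fmri(fmri)
        w = torch.softmax(self.gate(torch.cat([e, f], dim=-1)), dim=-1)
        return w[..., 0:1] * e + w[..., 1:2] * f  # relevance-weighted mix
```

Because the gate is computed per input, a noisy EEG trial can lean on the fMRI features and vice versa, which is the intuition behind dynamic modality weighting.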

2. Related Work

2.1. Unimodal Approaches

2.2. Bimodal Approaches

2.3. Limitations of Existing Approaches

3. Proposed Targeted Improvements

3.1. EEG Signal Processing Enhancements

Singular Spectrum Analysis (SSA) for EEG Decomposition

3.2. fMRI Data Processing Enhancements

3.3. Multimodal Fusion Strategy

3.4. Cross-Modal Contrastive Learning

3.5. Theoretical Framework

4. Experiment Setup

4.1. Baseline Methods

4.1.1. Unimodal Methods

  • EEG-SVM: Support Vector Machine classifier using time–frequency features from EEG data (a minimal sketch follows this list).
  • EEG-RF: Random Forest classifier using wavelet coefficients from EEG data.
  • fMRI-MVPA: Multivoxel Pattern Analysis using a linear SVM on fMRI data.
  • fMRI-3DCNN: 3D Convolutional Neural Network on fMRI data.
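A baseline of the EEG-SVM type named above is easy to reconstruct in outline: band-power features from each channel feed a standard SVM. The band edges, sampling rate and shapes below are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def bandpower_features(epochs: np.ndarray, fs: float = 256.0) -> np.ndarray:
    # epochs: (n_trials, n_channels, n_samples)
    freqs, psd = welch(epochs, fs=fs, nperseg=256, axis=-1)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 50)]  # Hz
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands]
    return np.stack(feats, axis=-1).reshape(len(epochs), -1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(bandpower_features(train_epochs), train_labels)  # illustrative usage
```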

4.1.2. Existing Multimodal Methods

  • EEG-fMRI-Concat: Simple concatenation of EEG and fMRI features with an SVM classifier.
  • EEG-fMRI-CCA: Canonical Correlation Analysis for feature fusion of EEG and fMRI data (a sketch follows this list).
  • MM-CNN: Multimodal Convolutional Neural Network for EEG and fMRI fusion.
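Similarly, the EEG-fMRI-CCA baseline can be outlined as: project both modalities onto maximally correlated components, then classify the concatenated projections. A minimal sketch, in which the trial-aligned feature matrices and the component count are assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

def cca_fuse(eeg_feats: np.ndarray, fmri_feats: np.ndarray, n_comp: int = 10):
    # n_comp must not exceed the feature count of either modality
    cca = CCA(n_components=n_comp)
    eeg_c, fmri_c = cca.fit_transform(eeg_feats, fmri_feats)
    return np.hstack([eeg_c, fmri_c]), cca

# fused, cca = cca_fuse(eeg_train, fmri_train)
# SVC().fit(fused, train_labels)  # illustrative usage
```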

5.1. Main Results

5.2. Ablation Study

5.3. Cross-Participant Generalization

5.4. Extended Study

6. Discussion

7. Limitations

8. Conclusions

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

  • Alderson-Day, B.; Fernyhough, C. Inner Speech: Development, Cognitive Functions, Phenomenology, and Neurobiology. Psychol. Bull. 2015, 141, 931–965.
  • Anumanchipalli, G.K.; Chartier, J.; Chang, E.F. Speech Synthesis from Neural Decoding of Spoken Sentences. Nature 2019, 568, 493–498.
  • Martin, S.; Iturrate, I.; Millán, J.d.R.; Knight, R.T.; Pasley, B.N. Decoding Inner Speech Using Electrocorticography: Progress and Challenges Toward a Speech Prosthesis. Front. Neurosci. 2018, 12, 422.
  • Huster, R.J.; Debener, S.; Eichele, T.; Herrmann, C.S. Methods for Simultaneous EEG-fMRI: An Introductory Review. J. Neurosci. 2012, 32, 6053–6060.
  • Cooney, C.; Folli, R.; Coyle, D. Optimizing Layers Improves CNN Generalization and Transfer Learning for Imagined Speech Decoding from EEG. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 1311–1316.
  • Agarwal and Kumar. EEG-Based Imagined Words Classification Using Hilbert Transform and Deep Networks. Multimed. Tools Appl. 2024, 83, 2725–2748.
  • Porbadnigk, A.; Wester, M.; Calliess, J.; Schultz, T. EEG-Based Speech Recognition: Impact of Temporal Effects. In Proceedings of the International Conference on Bio-Inspired Systems and Signal Processing, Volume 1: BIOSIGNALS (BIOSTEC 2009), Porto, Portugal, 14–17 January 2009; SciTePress: Setúbal, Portugal, 2009; pp. 376–381.
  • Nguyen, C.H.; Karavas, G.K.; Artemiadis, P. Inferring imagined speech using EEG signals: A new approach using Riemannian manifold features. J. Neural Eng. 2017, 15, 016002.
  • Lee, Y.E.; Lee, S.H.; Kim, S.H.; Lee, S.W. Towards Voice Reconstruction from EEG during Imagined Speech. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 6030–6038.
  • Lopes da Silva, F. EEG and MEG: Relevance to Neuroscience. Neuron 2013, 80, 1112–1128.
  • Gu, J.; Buidze, T.; Zhao, K.; Gläscher, J.; Fu, X. The neural network of sensory attenuation: A neuroimaging meta-analysis. Psychon. Bull. Rev. 2024.
  • Sun, J.; Li, M.; Chen, Z.; Zhang, Y.; Wang, S.; Moens, M.F. Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS 2023), New Orleans, LA, USA, 10–16 December 2023; Curran Associates, Inc.: New York, NY, USA, 2023; Volume 36, pp. 12332–12348.
  • Cai, H.; Dong, J.; Mei, L.; Feng, G.; Li, L.; Wang, G.; Yan, H. Functional and structural abnormalities of the speech disorders: A multimodal activation likelihood estimation meta-analysis. Cereb. Cortex 2024, 34, bhae075.
  • Takagi, Y.; Nishimoto, S. High-Resolution Image Reconstruction with Latent Diffusion Models from Human Brain Activity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 14453–14463.
  • Gong, P.; Jia, Z.; Wang, P.; Zhou, Y.; Zhang, D. ASTDF-Net: Attention-Based Spatial-Temporal Dual-Stream Fusion Network for EEG-Based Emotion Recognition. In Proceedings of the 31st ACM International Conference on Multimedia (MM '23), Ottawa, ON, Canada, 29 October–3 November 2023; pp. 883–892.
  • Su, W.C.; Dashtestani, H.; Miguel, H.O.; Condy, E.; Buckley, A.; Park, S.; Perreault, J.B.; Nguyen, T.; Zeytinoglu, S.; Millerhagen, J.; et al. Simultaneous multimodal fNIRS-EEG recordings reveal new insights in neural activity during motor execution, observation, and imagery. Sci. Rep. 2023, 13, 5151.
  • Passos, L.A.; Papa, J.P.; Del Ser, J.; Hussain, A.; Adeel, A. Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement. Inf. Fusion 2023, 90, 1–11.
  • Goebel, R.; Esposito, F. The Added Value of EEG-fMRI in Imaging Neuroscience. In EEG-fMRI: Physiological Basis, Technique, and Applications; Mulert, C., Lemieux, L., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 119–138.
  • Carmichael, D.W.; Vulliemoz, S.; Murta, T.; Chaudhary, U.; Perani, S.; Rodionov, R.; Rosa, M.J.; Friston, K.J.; Lemieux, L. Measurement of the Mapping between Intracranial EEG and fMRI Recordings in the Human Brain. Bioengineering 2024, 11, 224.
  • Koide-Majima, N.; Nishimoto, S.; Majima, K. Mental image reconstruction from human brain activity: Neural decoding of mental imagery via deep neural network-based Bayesian estimation. Neural Netw. 2024, 170, 349–363.
  • Liwicki, F.S.; Gupta, V.; Saini, R.; De, K.; Abid, N.; Rakesh, S.; Wellington, S.; Wilson, H.; Liwicki, M.; Eriksson, J. Bimodal Electroencephalography-Functional Magnetic Resonance Imaging Dataset for Inner-Speech Recognition. Sci. Data 2023, 10, 378.
  • Miyawaki, Y.; Uchida, H.; Yamashita, O.; Sato, M.a.; Morito, Y.; Tanabe, H.C.; Sadato, N.; Kamitani, Y. Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders. Neuron 2008, 60, 915–929.
  • Cetron, J.S.; Connolly, A.C.; Diamond, S.G.; May, V.V.; Haxby, J.V.; Kraemer, D.J.M. Decoding individual differences in STEM learning from functional MRI data. Nat. Commun. 2019, 10, 2027.
  • Sligte, I.G.; van Moorselaar, D.; Vandenbroucke, A.R.E. Decoding the Contents of Visual Working Memory: Evidence for Process-Based and Content-Based Working Memory Areas? J. Neurosci. 2013, 33, 1293–1294.
  • Herff, C.; Krusienski, D.J.; Kubben, P. The Potential of Stereotactic-EEG for Brain-Computer Interfaces: Current Progress and Future Directions. Front. Neurosci. 2020, 14, 123.
  • Gao, J.; Li, P.; Chen, Z.; Zhang, J. A Survey on Deep Learning for Multimodal Data Fusion. Neural Comput. 2020, 32, 829–864.
  • Aggarwal, S.; Chugh, N. Review of Machine Learning Techniques for EEG Based Brain Computer Interface. Arch. Comput. Methods Eng. 2022, 29, 3001–3020.
  • Zadeh, A.B.; Liang, P.P.; Poria, S.; Cambria, E.; Morency, L.P. Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, 15–20 July 2018; pp. 2236–2246.
  • Liu, Z.; Shen, Y.; Lakshminarasimhan, V.B.; Liang, P.P.; Zadeh, A.; Morency, L.P. Efficient low-rank multimodal fusion with modality-specific factors. arXiv 2018, arXiv:1806.00064.
  • Tsai, Y.H.H.; Bai, S.; Liang, P.P.; Kolter, J.Z.; Morency, L.P.; Salakhutdinov, R. Multimodal transformer for unaligned multimodal language sequences. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; Volume 2019, p. 6558.
  • Yu, W.; Xu, H.; Yuan, Z.; Wu, J. Learning modality-specific representations with self-supervised multi-task learning for multimodal sentiment analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021; Volume 35, pp. 10790–10797.
  • Han, W.; Chen, H.; Poria, S. Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online, 7–11 November 2021; pp. 9180–9192.
  • Yuan, Z.; Li, W.; Xu, H.; Yu, W. Transformer-based feature reconstruction network for robust multimodal sentiment analysis. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual, 20–24 October 2021; pp. 4400–4407.
  • Sun, Y.; Mai, S.; Hu, H. Learning to learn better unimodal representations via adaptive multimodal meta-learning. IEEE Trans. Affect. Comput. 2023, 14, 2209–2223.
  • Liu, F.; Shen, S.Y.; Fu, Z.W.; Wang, H.Y.; Zhou, A.M.; Qi, J.Y. LGCCT: A light gated and crossed complementation transformer for multimodal speech emotion recognition. Entropy 2022, 24, 1010.
  • Sun, L.; Lian, Z.; Liu, B.; Tao, J. Efficient multimodal transformer with dual-level feature restoration for robust multimodal sentiment analysis. IEEE Trans. Affect. Comput. 2024, 15, 309–325.
  • Fu, Z.; Liu, F.; Xu, Q.; Fu, X.; Qi, J. LMR-CBT: Learning modality-fused representations with CB-transformer for multimodal emotion recognition from unaligned multimodal sequences. Front. Comput. Sci. 2024, 18, 184314.
  • Wang, L.; Peng, J.; Zheng, C.; Zhao, T.; Zhu, L. A cross modal hierarchical fusion multimodal sentiment analysis method based on multi-task learning. Inf. Process. Manag. 2024, 61, 103675.
  • Shi, H.; Pu, Y.; Zhao, Z.; Huang, J.; Zhou, D.; Xu, D.; Cao, J. Co-space Representation Interaction Network for multimodal sentiment analysis. Knowl.-Based Syst. 2024, 283, 111149.

Aspect | Description
Dataset | Bimodal Dataset on Inner Speech
Participants | 4 healthy, right-handed (3 females, 1 male, aged 33–51 years)
Tasks | Two 4-class classification tasks: (1) social category: child, daughter, father, wife; (2) numeric category: four, three, ten, six
Data Types | Non-simultaneous EEG and fMRI recordings
Preprocessing | EEG: bandpass filter (1–50 Hz), artifact removal via ICA; fMRI: motion correction, slice timing correction, spatial normalization to MNI space
Validation Strategy | 5-fold cross-validation
Evaluation Metrics | Accuracy, F1-score, Area Under the Receiver Operating Characteristic Curve (AUC-ROC)
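The EEG preprocessing row of the table above (1–50 Hz bandpass, ICA-based artifact removal) maps onto a short MNE-Python pipeline. MNE is just one plausible toolchain; the file path, component count and excluded components below are placeholders.

```python
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # placeholder path
raw.filter(l_freq=1.0, h_freq=50.0)  # 1-50 Hz bandpass, as in the table

ica = ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]  # components judged to reflect eye/muscle artifacts
ica.apply(raw)        # reconstruct the signal without them
```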
Method | Acc ↑ (%) | F1-Score ↑ | AUC-ROC ↑
EEG-SVM | 28.5 ± 2.1 | 0.27 ± 0.02 | 0.32 ± 0.01
EEG-RF | 30.2 ± 1.8 | 0.29 ± 0.02 | 0.34 ± 0.01
fMRI-MVPA | 35.8 ± 1.5 | 0.35 ± 0.01 | 0.38 ± 0.01
fMRI-3DCNN | 38.3 ± 1.3 | 0.37 ± 0.01 | 0.40 ± 0.01
EEG-fMRI-Concat | 40.5 ± 1.2 | 0.40 ± 0.01 | 0.42 ± 0.01
EEG-fMRI-CCA | 42.1 ± 1.0 | 0.41 ± 0.01 | 0.49 ± 0.01
MM-CNN | 44.7 ± 0.9 | 0.44 ± 0.01 | 0.55 ± 0.00
Our Method
Method | Acc ↑ (%) | F1-Score ↑ | AUC-ROC ↑
EEG-SVM | 17.8 ± 2.2 | 0.16 ± 0.02 | 0.21 ± 0.01
EEG-RF | 19.5 ± 1.9 | 0.18 ± 0.02 | 0.23 ± 0.01
fMRI-MVPA | 24.9 ± 1.6 | 0.24 ± 0.02 | 0.31 ± 0.01
fMRI-3DCNN | 27.6 ± 1.4 | 0.26 ± 0.01 | 0.33 ± 0.01
EEG-fMRI-Concat | 29.8 ± 1.3 | 0.29 ± 0.01 | 0.39 ± 0.01
EEG-fMRI-CCA | 29.3 ± 1.1 | 0.30 ± 0.01 | 0.41 ± 0.01
MM-CNN | 33.9 ± 1.0 | 0.33 ± 0.01 | 0.44 ± 0.00
Our Method
Model Variant | Acc ↑ (%) | F1-Score ↑ | AUC-ROC ↑
Full Model | 47.2 ± 0.7 | 0.47 ± 0.01 | 0.56 ± 0.00
w/o EEG-Raw | 45.0 ± 0.8 | 0.45 ± 0.01 | 0.54 ± 0.01
w/o EEG-MTF | 44.5 ± 0.9 | 0.44 ± 0.01 | 0.53 ± 0.01
w/o fMRI | 43.7 ± 0.9 | 0.43 ± 0.01 | 0.52 ± 0.01
w/o Cross-Perception | 43.9 ± 0.8 | 0.44 ± 0.01 | 0.53 ± 0.01
w/o Adaptive Fusion | 45.3 ± 0.8 | 0.45 ± 0.01 | 0.55 ± 0.01
Model Variant | Acc ↑ (%) | F1-Score ↑ | AUC-ROC ↑
Full Model | 36.5 ± 0.8 | 0.36 ± 0.01 | 0.45 ± 0.00
w/o EEG-Raw | 34.4 ± 0.9 | 0.34 ± 0.01 | 0.43 ± 0.01
w/o EEG-MTF | 33.9 ± 1.0 | 0.33 ± 0.01 | 0.42 ± 0.01
w/o fMRI | 33.2 ± 1.0 | 0.33 ± 0.01 | 0.41 ± 0.01
w/o Cross-Perception | 33.4 ± 0.9 | 0.33 ± 0.01 | 0.42 ± 0.01
w/o Adaptive Fusion | 34.7 ± 0.9 | 0.34 ± 0.01 | 0.44 ± 0.01
Task | Our Model Accuracy (%) | Best Baseline Accuracy (%)
Social Words | 47.2 ± 0.7 | 47.3 ± 0.1
Numeric Words | 36.5 ± 0.8 | 36.6 ± 0.1
Method | CMU-MOSEI: Acc-7 ↑ (%) / Acc-5 ↑ (%) / Acc-2 ↑ (%) / MAE ↓ | CMU-MOSI: Acc-7 ↑ (%) / Acc-5 ↑ (%) / Acc-2 ↑ (%) / MAE ↓
TFN (2018) | 50.2 / - / 82.5 / 0.593 | 34.9 / - / 80.8 / 0.901
LMF (2018) | 48.0 / - / 82.0 / 0.623 | 33.2 / - / 82.5 / 0.917
MulT (2019) | 52.6 / 54.1 / 83.5 / 0.564 | 40.4 / 46.7 / 83.4 / 0.846
Self-MM (2021) | 53.6 / 55.4 / 85.0 / 0.533 | 46.4 / 52.8 / 84.6 / 0.717
MMIM (2021) | 53.2 / 55.0 / 85.0 / 0.536 | 46.9 / 53.0 / 85.3 / 0.712
TFR-Net (2021) | 52.3 / 54.3 / 83.5 / 0.551 | 46.1 / 53.2 / 84.0 / 0.721
AMML (2022) | 52.4 / - / 85.3 / 0.614 | 46.3 / - / 84.9 / 0.723
LGCCT (2022) | 47.5 / - / 81.1 / - | - / - / - / -
EMT (2023) | 54.5 / 56.3 / 86.0 / 0.527 | 47.4 / 54.1 / 85.0 / 0.705
LMR-CBT (2024) | 51.9 / - / 82.7 / - | 41.4 / - / 83.1 / 0.774
CMHFM (2024) | 52.8 / 54.4 / 84.5 / 0.548 | 37.2 / 42.4 / 81.7 / 0.907
CRNet (2024) | 53.8 / - / 86.4 / 0.541 | 47.4 / - / 86.4 / 0.712
Ours

Share and Cite

Qin, J.; Zong, L.; Liu, F. Exploring Inner Speech Recognition via Cross-Perception Approach in EEG and fMRI. Appl. Sci. 2024 , 14 , 7720. https://doi.org/10.3390/app14177720




September 23, 2015

UW team links two human brains for question-and-answer experiment

University of Washington graduate student Jose Ceballos wears an electroencephalography (EEG) cap that records brain activity and sends a response to a second participant over the Internet. University of Washington

Imagine a question-and-answer game played by two people who are not in the same place and not talking to each other. Round after round, one player asks a series of questions and accurately guesses the object the other is thinking about.

Sci-fi? Mind-reading superpowers? Not quite.

University of Washington researchers recently used a direct brain-to-brain connection to enable pairs of participants to play a question-and-answer game by transmitting signals from one brain to the other over the Internet. The experiment, detailed today in PLOS ONE , is thought to be the first to show that two brains can be directly linked to allow one person to guess what’s on another person’s mind.

“This is the most complex brain-to-brain experiment, I think, that’s been done to date in humans,” said lead author Andrea Stocco , an assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences .

Here’s how it works: The first participant, or “respondent,” wears a cap connected to an electroencephalography (EEG) machine that records electrical brain activity. The respondent is shown an object (for example, a dog) on a computer screen, and the second participant, or “inquirer,” sees a list of possible objects and associated questions. With the click of a mouse, the inquirer sends a question and the respondent answers “yes” or “no” by focusing on one of two flashing LED lights attached to the monitor, which flash at different frequencies.

Both “yes” and “no” answers send a signal to the inquirer via the Internet and activate a magnetic coil positioned behind the inquirer’s head, but only a “yes” answer generates a response intense enough to stimulate the visual cortex and cause the inquirer to see a flash of light known as a “phosphene.” The phosphene, which might look like a blob, waves or a thin line, is created through a brief disruption in the visual field and tells the inquirer the answer is yes. Through answers to these simple yes or no questions, the inquirer identifies the correct item.
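Although the article describes the setup rather than the decoding code, the readout implied by the two flickering LEDs is the classic steady-state visual evoked potential (SSVEP) comparison: whichever flash frequency dominates the EEG spectrum over visual cortex indicates where the respondent was attending. A hedged sketch, with illustrative frequencies and no claim to match the study's actual classifier:

```python
import numpy as np

def decode_answer(eeg: np.ndarray, fs: float,
                  f_yes: float = 13.0, f_no: float = 12.0) -> str:
    # eeg: one occipital-channel segment; frequencies are placeholders
    windowed = eeg * np.hanning(len(eeg))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def power_at(f: float) -> float:
        return float(spectrum[np.argmin(np.abs(freqs - f))])

    return "yes" if power_at(f_yes) > power_at(f_no) else "no"
```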

The experiment was carried out in dark rooms in two UW labs located almost a mile apart and involved five pairs of participants, who played 20 rounds of the question-and-answer game. Each game had eight objects and three questions that would solve the game if answered correctly. The sessions were a random mixture of 10 real games and 10 control games that were structured the same way.

The researchers took steps to ensure participants couldn’t use clues other than direct brain communication to complete the game. Inquirers wore earplugs so they couldn’t hear the different sounds produced by the varying stimulation intensities of the “yes” and “no” responses. Since noise travels through the skull bone, the researchers also changed the stimulation intensities slightly from game to game and randomly used three different intensities each for “yes” and “no” answers to further reduce the chance that sound could provide clues.

University of Washington postdoctoral student Caitlin Hudac wears a cap that uses transcranial magnetic stimulation (TMS) to deliver brain signals from the other participant. University of Washington

The researchers also repositioned the coil on the inquirer’s head at the start of each game, but for the control games, added a plastic spacer undetectable to the participant that weakened the magnetic field enough to prevent the generation of phosphenes. Inquirers were not told whether they had correctly identified the items, and only the researcher on the respondent end knew whether each game was real or a control round.

“We took many steps to make sure that people were not cheating,” Stocco said.

Participants were able to guess the correct object in 72 percent of the real games, compared with just 18 percent of the control rounds. Incorrect guesses in the real games could be caused by several factors, the most likely being uncertainty about whether a phosphene had appeared.

“They have to interpret something they’re seeing with their brains,” said co-author Chantel Prat , a faculty member at the Institute for Learning & Brain Sciences and a UW associate professor of psychology. “It’s not something they’ve ever seen before.”

Errors can also result from respondents not knowing the answers to questions or focusing on both answers, or by the brain signal transmission being interrupted by hardware problems.

“While the flashing lights are signals that we’re putting into the brain, those parts of the brain are doing a million other things at any given time too,” Prat said.

The study builds on the UW team’s initial experiment in 2013, when it was the first to demonstrate a direct brain-to-brain connection between humans. Other scientists have connected the brains of rats and monkeys, and transmitted brain signals from a human to a rat, using electrodes inserted into animals’ brains. In the 2013 experiment, the UW team used noninvasive technology to send a person’s brain signals over the Internet to control the hand motions of another person.

University of Washington researchers Andrea Stocco, left, and Chantel Prat, who in 2013 were part of a UW team that was the first to demonstrate a direct brain-to-brain connection between two humans.

The experiment evolved out of research by co-author Rajesh Rao, a UW professor of computer science and engineering, on brain-computer interfaces that enable people to activate devices with their minds. In 2011, Rao began collaborating with Stocco and Prat to determine how to link two human brains together.

In 2014, the researchers received a $1 million grant from the W.M. Keck Foundation that allowed them to broaden their experiments to decode more complex interactions and brain processes. They are now exploring the possibility of “brain tutoring,” transferring signals directly from healthy brains to ones that are developmentally impaired or impacted by external factors such as a stroke or accident, or simply to transfer knowledge from teacher to pupil.

The team is also working on transmitting brain states — for example, sending signals from an alert person to a sleepy one, or from a focused student to one who has attention deficit hyperactivity disorder, or ADHD.

“Imagine having someone with ADHD and a neurotypical student,” Prat said. “When the non-ADHD student is paying attention, the ADHD student’s brain gets put into a state of greater attention automatically.”

Many technological advancements over the past century, from the telegraph to the Internet, were created to facilitate communication between people. The UW team’s work takes a different approach, using technology to strip away the need for such intermediaries.

“Evolution has spent a colossal amount of time to find ways for us and other animals to take information out of our brains and communicate it to other animals in the forms of behavior, speech and so on,” Stocco said. “But it requires a translation. We can only communicate part of whatever our brain processes.

“What we are doing is kind of reversing the process a step at a time by opening up this box and taking signals from the brain and with minimal translation, putting them back in another person’s brain,” he said.

Other co-authors are UW computer science and neurobiology undergraduate student Darby Losey, UW bioengineering doctoral students Jeneva Cronin and Joseph Wu, and Justin Abernethy, a research assistant at the UW Institute for Learning & Brain Sciences.


In a First, Experiment Links Brains of Two Rats

By James Gorman

Feb. 28, 2013

In an experiment that sounds straight out of a science fiction movie, a Duke neuroscientist has connected the brains of two rats in such a way that when one moves to press a lever, the other one does, too — most of the time.

The neuroscientist, Miguel Nicolelis, known for successfully demonstrating brain-machine connections, like the one in which a monkey controlled a robotic arm with its thoughts, said this was the first time one animal’s brain had been linked to another.

The question, he said, was: “Could we fool the brain? Could we make the brain process signals from another body?” The answer, he said, was yes.

He and other scientists at Duke, and in Brazil, published the results of the experiment in the journal Scientific Reports. The work received mixed reviews from other scientists, ranging from “amazing” to “very simplistic.”

Much of Dr. Nicolelis’s work is directed toward creating a full exoskeleton that a paralyzed person could operate with brain signals. Although this experiment is not directly related, he said, it helps refine the ability to read and translate brain signals, an important part of all prosthetic devices connected to the brain, and an area in which brain science is making great advances.

He also speculated about the future possibility of a biological computer, in which numerous brains are connected, and views this as a small step in that direction.


Ultrasound device shows promise for treating chronic pain

University of Utah engineers developed Diadem, which noninvasively stimulates deep brain regions, potentially disrupting the faulty signals that lead to chronic pain

University of Utah

The Diadem device invented by University of Utah researchers to treat chronic pain and depression. Credit: University of Utah

Pain is a necessary biological signal, but a variety of conditions can cause those signals to go awry. For people with chronic pain, the root is often faulty signals emerging deep within the brain, giving false alarms about a wound that has since healed, a limb that has since been amputated, or other intricate, hard-to-explain scenarios.

Patients with this kind of life-altering pain are constantly looking for new treatment options; now a new device from the University of Utah may represent a practical, long-sought solution.

Researchers at the university’s John and Marcia Price College of Engineering and Spencer Fox Eccles School of Medicine have published promising findings about an experimental therapy that has given many participants relief after a single treatment session. They are now recruiting participants for a final round of trials.

At the core of this research is Diadem, a new biomedical device that uses ultrasound to noninvasively stimulate deep brain regions, potentially disrupting the faulty signals that lead to chronic pain.

The findings from a recent clinical trial are published in the journal Pain. The study builds on two previous studies, published in Nature Communications Engineering and IEEE Transactions on Biomedical Engineering, which describe the unique features and characteristics of the device and demonstrate its efficacy.

The study was conducted by Jan Kubanek, a professor in Price’s Department of Biomedical Engineering, and Thomas Riis, a postdoctoral researcher in his lab. They collaborated with Akiko Okifuji, professor of Anesthesiology in the School of Medicine, as well as Daniel Feldman, a graduate student in the departments of Biomedical Engineering and Psychiatry, and laboratory technician Adam Losser.

The randomized sham-controlled study recruited 20 participants with chronic pain, each of whom underwent two 40-minute sessions with Diadem, receiving either real or sham ultrasound stimulation. Patients rated their pain one day and one week after their sessions; 60% of the group receiving real treatment reported a clinically meaningful reduction in symptoms at both time points.

“We were not expecting such strong and immediate effects from only one treatment,” Riis said.

“The rapid onset of the pain symptom improvements as well as their sustained nature are intriguing, and open doors for applying these noninvasive treatments to the many patients who are resistant to current treatments,” Kubanek added.

Diadem’s approach is based on neuromodulation, a therapeutic technique that seeks to directly regulate the activity of certain brain circuits. Other neuromodulation approaches are based on electric currents and magnetic fields, but those methods cannot selectively reach the brain structure investigated in the researchers’ recent trial: the anterior cingulate cortex.

After an initial functional MRI scan to map the target region, the researchers adjusted Diadem’s ultrasound emitters to correct for the way the waves deflect off the skull and other brain structures. This procedure was published in  Nature Communications Engineering .
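The underlying idea of focusing an emitter array is that each element is timed so its wavefront reaches the target in phase with the others. The sketch below shows that delay calculation in its most idealized geometric form; the element layout, target depth, and speed of sound are illustrative assumptions, and the study’s actual correction relies on imaging-based acoustic modeling of the skull rather than this simple geometry.

```python
import numpy as np

SPEED_OF_SOUND = 1500.0  # m/s, approximate value for soft tissue

def focusing_delays(elements: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Per-element firing delays (s) so all wavefronts arrive at
    `target` simultaneously: fire the farthest element first."""
    travel_times = np.linalg.norm(elements - target, axis=1) / SPEED_OF_SOUND
    return travel_times.max() - travel_times

# Hypothetical 8-element linear array focusing 8 cm deep.
elements = np.array([[x, 0.0, 0.0] for x in np.linspace(-0.05, 0.05, 8)])
target = np.array([0.0, 0.0, 0.08])
print(focusing_delays(elements, target))
```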

The team is now preparing for a Phase 3 clinical trial, the final step before approval from the Food and Drug Administration to use Diadem as a treatment for the general public. 

“If you or your relatives suffer from chronic pain that does not respond to treatments, please reach out to us; we need to recruit many participants so that these treatments can be approved for the general public,” Kubanek said. “With your help, we think chronic pain can be effectively silenced. And with new pain treatment options, we can tackle the opioid crisis, too.”

The study titled “Noninvasive targeted modulation of pain circuits with focused ultrasonic waves” was published July 30 in the journal Pain. Funding came from the National Institutes of Health and the University of Utah.

DOI: 10.1097/j.pain.0000000000003322

Method of Research: Randomized controlled/clinical trial

Article Title: Noninvasive targeted modulation of pain circuits with focused ultrasonic waves

Article Publication Date: 30-Jul-2024

COI Statement: J. Kubanek is an inventor on a patent related to the device function. The other authors have no conflict of interest to declare.


Brain might not stand in the way of free will

By Anil Ananthaswamy

6 August 2012


Our decision-making process remains hazy (Image: Jannes Glas/Getty)

Editorial: “Can we live without free will?”

Advocates of free will can rest easy, for now. A 30-year-old classic experiment that is often used to argue against free will might have been misinterpreted.

In the early 1980s, Benjamin Libet at the University of California, San Francisco, used electroencephalography (EEG) to record the brain activity of volunteers who had been told to make a spontaneous movement. With the help of a precise timer that the volunteers were asked to read at the moment they became aware of the urge to act, Libet found there was a 200 millisecond delay, on average, between this urge and the movement itself.

But the EEG recordings also revealed a signal that appeared in the brain even earlier – 550 milliseconds, on average – before the action. Called the readiness potential, this has been interpreted as a blow to free will: it suggests the brain prepares to act roughly 350 milliseconds before we become conscious of the urge to move.

This conclusion assumes that the readiness potential is the signature of the brain planning and preparing to move. “Even people who have been critical of Libet’s work, by and large, haven’t challenged that assumption,” says Aaron Schurger of the National Institute of Health and Medical Research in Saclay, France.

One attempt to do so came in 2009. Judy Trevena and Jeff Miller of the University of Otago in Dunedin, New Zealand, asked volunteers to decide, after hearing a tone, whether or not to tap on a keyboard. The readiness potential was present regardless of their decision, suggesting that it did not represent the brain preparing to move. Exactly what it did mean, though, still wasn’t clear.

Crossing a threshold

Now, Schurger and colleagues have an explanation. They began by posing a question: how does the brain decide to make a spontaneous movement? They looked to other decision-making scenarios for clues. Previous studies have shown that when we have to make a decision based on visual input, for example, assemblies of neurons start accumulating visual evidence in favour of the various possible outcomes. A decision is triggered when the evidence favouring one particular outcome becomes strong enough to tip its associated assembly of neurons across a threshold.

Schurger’s team hypothesised that something similar happens in the brain during the Libet experiment. Volunteers, however, are specifically asked to ignore any external information before they make a spontaneous movement, so the trigger to act must be internal.

The answer, Schurger’s team reasoned, lies in the random fluctuations of neural activity in the brain: movement is triggered when this neural noise accumulates and crosses a threshold.

To probe the idea, the team first built a computer model of such a neural accumulator. In the model, each time the neural noise crossed a threshold it signified a decision to move. When they ran the model numerous times and looked at the pattern of neural noise leading up to each decision, it looked like a readiness potential.
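A leaky, noise-driven accumulator of this kind is easy to simulate. The sketch below is a minimal version (all parameter values are illustrative, not those from Schurger’s paper): each trial integrates noisy input until a threshold crossing marks a “decision,” and averaging many trials time-locked to that crossing yields a slow buildup resembling a readiness potential.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative parameters: integration step, leak, weak input,
# noise level, and decision threshold.
dt, leak, drift, noise_sd, threshold = 0.001, 0.5, 0.05, 0.1, 0.15

def run_trial(max_steps=60000):
    """Integrate noisy input until the accumulator crosses threshold;
    return the trajectory up to and including the crossing."""
    x, traj = 0.0, []
    for _ in range(max_steps):
        x += (drift - leak * x) * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        traj.append(x)
        if x >= threshold:
            break
    return np.array(traj)

# Average the final second of many trials, time-locked to the
# threshold crossing: the mean traces out an RP-like ramp even
# though each crossing is ultimately driven by noise.
window = int(1.0 / dt)
epochs = [run_trial() for _ in range(200)]
epochs = [e[-window:] for e in epochs if len(e) > window]
print(np.mean(epochs, axis=0)[::100])
```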

Next, the team repeated Libet’s experiment, but this time if, while waiting to act spontaneously, the volunteers heard a click they had to act immediately. The researchers predicted that the fastest response to the click would be seen in those in whom the accumulation of neural noise had neared the threshold – something that would show up in their EEG as a readiness potential.

This is exactly what the team found. In those with slower responses to the click, the readiness potential was absent in the EEG recordings.

Spontaneous brain activity

“Libet argued that our brain has already decided to move well before we have a conscious intention to move,” says Schurger. “We argue that what looks like a pre-conscious decision process may not in fact reflect a decision at all. It only looks that way because of the nature of spontaneous brain activity.”

So what does this say about free will? “If we are correct, then the Libet experiment does not count as evidence against the possibility of conscious will,” says Schurger.

Cognitive neuroscientist Anil Seth of the University of Sussex in Brighton, UK, is impressed by the work, but also circumspect about what it says about free will. “It’s a more satisfying mechanistic explanation of the readiness potential. But it doesn’t bounce conscious free will suddenly back into the picture,” he says. “Showing that one aspect of the Libet experiment can be open to interpretation does not mean that all arguments against conscious free will need to be ejected.”

According to Seth, when the volunteers in Libet’s experiment said they felt an urge to act, that urge is an experience, similar to an experience of smell or taste. The new model is “opening the door towards a richer understanding of the neural basis of the conscious experience of volition”, he says.

Journal reference: Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1210467109


Experiment: The Consciousness Detector - EEG, Oddball Task, and P300

Now that you've seen the rhythmic activity of the brain, you can look at coordinated surges in brain activity associated with specific sensory events. Are you conscious? Let's find out! Note: For now, this advanced experiment requires that you have Matlab installed on your computer; it will be used for data analysis.

What will you learn?

With this experiment, you will learn even more about communication within the human brain, and look at the brain's response when it senses an unexpected stimulus. You will observe this by measuring the p300 signal, which is a more cognitively-based signal than the visual cortex alpha rhythm.

Prerequisite Labs

  • EEG - You should have an intermediate understanding of the Arduino platform and how to use the Heart & Brain SpikerShield to record your alpha waves before moving on to this more challenging experiment of recording event related potentials.

Heart & Brain SpikerShield Bundle

What exactly happens in our brain, "behind the scenes", when we're thinking? In our EEG-Record from the Human Brain experiment, we explored rhythmic activity in the occipital lobe of the brain when we don't see any light (eyes closed). But this reflects a sort of "state" that part of the brain is entering, whereas thoughts are often rapid and momentary. If we want to begin looking at cognition, there are some non-rhythmic electrical phenomena we can pick up in the brain.

Let's talk a little more about the electrical signals we can see through the skull. The P300 signal is an event related potential (ERP), meaning that the signal is seen on an EEG as a rapid single potential change in response to a sensory, cognitive, or motor event. The signal's peak comes an average of 300 milliseconds after, or "post", the stimulus, so we call it the P300. This is opposed to rhythmic waves (alpha, beta, theta, etc.), which reflect a longer-term state the brain is experiencing.


The P300 signal is thought to come from the parietal lobe, which is where we will place the electrodes. This part of your brain has an important role in attention to your surroundings. People with damage to the right parietal hemisphere can often have difficulty acknowledging the existence of the left side of the world, a phenomenon known as "hemi-neglect" (Note: damage to the left parietal hemisphere does not cause right-sided spatial neglect; why this asymmetry exists is still a mystery).

When you see or hear something odd, something that sticks out to you, neurons in the parietal lobe surge in activity, spiking rapidly as your brain works to react to and understand this new stimulus. The P300 signal doesn't come directly from sensation (seeing a light come on or hearing a sound) but from your brain's assessment of these "unexpected" stimuli. We'll see in this experiment that not all sounds initiate this signal.


The experiment you'll be doing is called an "oddball task": you will hear a repeated, regular presentation of tones of a particular frequency, but 10 percent of the time a slightly different, higher-pitched tone will play instead. An average of 300 milliseconds after this novel, or "oddball", event, our EEG will manifest a large combined potential across the parietal lobe.
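Generating the stimulus sequence is simple: each tone is independently an oddball with 10 percent probability. A minimal sketch, with illustrative tone frequencies:

```python
import numpy as np

rng = np.random.default_rng(1)
STANDARD_HZ, ODDBALL_HZ, P_ODDBALL = 440.0, 880.0, 0.1

def make_sequence(n_tones: int = 100) -> list:
    """Tone frequencies where each tone is an oddball with 10% chance."""
    return [ODDBALL_HZ if rng.random() < P_ODDBALL else STANDARD_HZ
            for _ in range(n_tones)]

seq = make_sequence()
print(f"{seq.count(ODDBALL_HZ)} oddballs out of {len(seq)} tones")
```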


One of the most fascinating applications of the auditory p300 is to examine the brain activity of comatose patients. Some evidence shows that if you perform this experiment on an individual in a coma and see a variant of the p300 signal in their EEG, it is a strong indicator that they might be able to be brought out of the coma. Hence the name of this experiment: the "Consciousness Detector."


The p300 signal is difficult to see while you are performing the experiment; it only becomes clear after doing data analysis on many trials. It is most visible after you take all the time periods following the oddball tones, average those recordings, and compare them to the same time periods around the non-oddball tones. The drawing below illustrates this concept. Averaging multiple trials highlights the signal for us and allows us to directly compare when we hypothesize a p300 should appear versus when we hypothesize it should not.
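In code, the averaging step reduces to cutting fixed-length epochs after each tone onset and taking their mean. The Python sketch below just illustrates the idea; the sampling rate and array names are assumptions, and the actual analysis in this experiment is done with the provided Matlab scripts.

```python
import numpy as np

FS = 1000  # samples per second (assumed)

def average_erp(eeg: np.ndarray, onsets, fs: int = FS) -> np.ndarray:
    """Average 1-second epochs of `eeg` starting at each onset index;
    uncorrelated background EEG averages toward zero, leaving the
    event-related potential."""
    epochs = [eeg[i:i + fs] for i in onsets if i + fs <= len(eeg)]
    return np.mean(epochs, axis=0)

# Usage (hypothetical arrays): compare the two averages directly.
# erp_standard = average_erp(eeg, standard_onsets)
# erp_oddball  = average_erp(eeg, oddball_onsets)
```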


Before you begin, make sure you have the Backyard Brains Spike Recorder and Arduino programs installed on your computer. The Arduino "Sketch" is what you install on your Arduino circuit board using the Arduino desktop software (your board comes preinstalled if you bought the Arduino from us), and the Backyard Brains Spike Recorder program allows you to visualize and save the data on your computer when doing experiments. You should be familiar with this from your experience finding alpha waves. For now, analyzing the data you collect requires Matlab, which is typically available on university engineering library computers. Downloads:

  • Spike Recorder Computer Software
  • EEG Arduino code
  • Buzzer Arduino code
  • Matlab Scripts

Tutorial Video of Experiment

This experiment uses the Heart and Brain SpikerShield that has a gain of approximately 880x with a bandpass filter of 1-129 Hz.
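If you want to approximate that analog front end in software, a digital Butterworth bandpass over the same 1-129 Hz range is one option. A minimal sketch, with the filter order and sampling rate as assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sampling rate in Hz (assumed)

def condition_signal(raw: np.ndarray) -> np.ndarray:
    """Apply ~880x gain and a zero-phase 1-129 Hz Butterworth bandpass,
    mirroring the SpikerShield's analog front end in software."""
    b, a = butter(2, [1.0, 129.0], btype="bandpass", fs=FS)
    return filtfilt(b, a, raw * 880.0)
```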


In the Consciousness Detector - EEG, Oddball Task, and P300, we will examine the event-related potential that results from the arrival of an oddball stimulus among a series of standard stimuli.

Device Setup


Here is a circuit diagram showing the connections to be made:


Electrode Setup and Testing


  • The subject will mark each oddball tone that they hear on the piece of paper (and keep track of the total) until they've tallied fifty. At this point, we end the recording.
  • Each gray line in this figure shows one second of the EEG recording surrounding each standard tone onset in the experiment. The average of each of these tone responses is taken and plotted in red.
  • Each gray line in this figure shows one second of the EEG recording surrounding each oddball tone onset in the experiment. The average of each of these tone responses is taken and plotted in green. Since the P300 is only between 10 and 20 µV, it can easily be lost in the EEG "noise". For this reason the signal only becomes visible when epochs are averaged around the tone onset, so that the background EEG "noise" averages toward zero.


  • We must check that our results are scientifically significant by applying statistics principles to our data. To check whether our results may have occurred by chance, we choose at random as many time points as there are oddball tones and average one second of data surrounding these randomly chosen points, plotted in gray. This is done one hundred times and another average is taken and plotted in blue. This is the Monte Carlo average (see the sketch after this list).
  • All averages are plotted together. A 95% confidence interval around the Monte Carlo average is plotted; data falling outside this interval are considered significant. The P300 waveform is labeled with the latency of the largest positive potential occurring between 250 ms and 600 ms after the oddball tone, as the P300 signal is defined.
  • You can download the data file used in the video above for your comparisons.
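The Monte Carlo control described in the list above can be sketched in a few lines: build surrogate averages from random onsets, repeat, and take percentile bounds. The sampling rate and names are illustrative; the provided Matlab scripts implement the real analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
FS = 1000  # samples per second (assumed)

def monte_carlo_band(eeg: np.ndarray, n_events: int,
                     n_repeats: int = 100, fs: int = FS):
    """95% bounds of surrogate epoch averages built from random onsets."""
    surrogates = []
    for _ in range(n_repeats):
        onsets = rng.integers(0, len(eeg) - fs, size=n_events)
        surrogates.append(np.mean([eeg[i:i + fs] for i in onsets], axis=0))
    surrogates = np.asarray(surrogates)
    return (np.percentile(surrogates, 2.5, axis=0),
            np.percentile(surrogates, 97.5, axis=0))

# Oddball-average samples outside this band are treated as significant;
# the P300 latency label marks the largest positive peak 250-600 ms
# after the oddball tone.
```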


Scientists discover how a gut insulin antagonist controls fat loss in C. elegans by modulating brain signals


Tarun Sai Lomte

This study highlights a novel mechanism of gut-to-brain communication crucial for lipid metabolism.

In a recent study published in Nature Communications, researchers identify an endogenous insulin antagonist that modulates fat loss in the roundworm Caenorhabditis elegans.

How is information transmitted between the nervous system and intestines?

The central nervous system (CNS) plays a significant role in systemic lipid homeostasis. Additionally, endocrine hormones signal from peripheral organs to relay fasted and fed state information throughout the body. The intestines transmit internal state information to the brain and other organs through gut hormones.

In the roundworm C. elegans, the utilization of lipids, which are primarily stored and metabolized in the intestine, is determined mainly by sensory neurons and their circuits. Previously, the current study's researchers identified specific sensory-neuron activities and their roles in lipid storage. Whereas URX and BAG neurons can detect and respond to oxygen levels in their surrounding environment, ADL and ADF neurons sense population density and bacterial food, respectively.

These researchers also identified FMRFamide-like neuropeptide 7 (FLP-7), a brain-to-gut neuroendocrine peptide involved in relaying sensory information from the nervous system to the intestine. FLP-7 secretion is mediated by both URX and ADL neurons, and the peptide is subsequently detected by neuropeptide receptor 22 (NPR-22).

Thus, the FLP-7/NPR-22 axis represents a common brain-to-gut pathway for the sensory nervous system relaying information to the intestine. However, the mechanisms by which peripheral organs relay information to the nervous system in C. elegans remain unclear, despite evidence suggesting the existence of these signals.

Study findings

The present study investigates the molecular features underlying gut-to-brain information relay in C. elegans. An intestine-specific ribonucleic acid interference (RNAi) screen of genes encoding small peptides was performed to identify changes in FLP-7 secretion from ASI neurons (FLP-7 ASI), in which insulin-like peptide 7 (ins-7) was identified as the most potent hit. In fact, FLP-7 ASI secretion increased nearly two-fold in the absence of ins-7.


The researchers also generated transgenic rescue lines in which ins-7 expression was restored in ins-7 null mutants using INT1-specific promoters. Restoring ins-7 expression in INT1 cells alone, or more broadly throughout the intestine, completely rescued FLP-7 ASI secretion.

INT1-specific ins-7 RNAi and overexpression also increased and suppressed FLP-7 ASI secretion, respectively. Additionally, ins-7 null mutants with increased FLP-7 ASI secretion exhibited significantly reduced intestinal fat stores, which was dependent on the flp-7 gene.

Selective inactivation of flp-7 in ASI neurons revealed that the fat phenotype in ins-7 mutants required flp-7 in ASI neurons. The reduction in fat stores in ins-7 nulls was also dependent upon the induction of adipose triglyceride lipase 1 ( atgl-1 ) gene in the presence of flp-7.

The researchers also investigated the relationship between ins-7 and daf-2, the only insulin receptor in C. elegans. Notably, daf-2 mutants showed reduced FLP-7 ASI secretion, in contrast to ins-7 mutants. ASI neuron-specific daf-2 inhibition phenocopied the global daf-2 mutation, whereas ASI-specific daf-2 rescue restored the secretion of FLP-7 to wild-type levels.

The localization of DAF-16 in ASI neurons was determined by examining its cytoplasmic-to-nuclear (C:N) ratio, which is a sensitive and accurate hallmark of DAF-2 function. In well-fed wild-type animals, DAF-16 was present in the cytoplasm with a C:N ratio of 1.2; however, in daf-2 mutants, DAF-16 was translocated to the nucleus with a C:N ratio of 0.5. In ins-7 mutants and ins-7 -overexpressed worms, the C:N ratio was similar to that of wild-type animals.

These effects were subsequently assessed after a three-hour fasting state, which depletes about 80% of intestinal fat stores. In the fasted state, DAF-16 localization did not shift between cytoplasm and nucleus in wild-type animals, nor in ins-7 or daf-2 mutants. Comparatively, in worms overexpressing ins-7, DAF-16 translocated to the nucleus with a C:N ratio of 0.8.

In the fasted state, DAF-2 and INS-7 colocalized on the ASI neuronal surface in wild-type worms, thus indicating that INS-7 may differentially regulate FLP-7 ASI in fasted and fed states.

FLP-7 secretion dynamics were subsequently determined in the absence and presence of ins-7 . In food-deprived wild-type animals, increased FLP-7 secretion was not evident until three hours.

Feeding after three hours restored FLP-7 secretion to baseline levels. This feeding state-dependent FLP-7 regulation was abrogated in ins-7 null mutants, as FLP-7 secretion was chronically high and independent of fed or fasted states.

The dynamics of INS-7 secretion in food-deprived wild-type animals were also assessed. An increase in INS-7 secretion was observed within thirty minutes of food deprivation, with levels restored to baseline upon re-feeding.

Conclusions

INS-7 is secreted from specialized enteroendocrine INT1 cells of C. elegans and functions as an antagonist of the DAF-2 receptor in ASI neurons to inhibit FLP-7 secretion. FLP-7 ASI release promotes fat loss; the gut-to-brain peptide INS-7 therefore restrains this signal when no food is sensed in the intestine.

The current study reveals a mechanism of gut-to-brain homeostatic communication in which lipid metabolism balances internal metabolic states and external sensory cues.

  • Liu, C. C., Khan, A., Seban, N., et al. (2024). A homeostatic gut-to-brain insulin antagonist restrains neuronally stimulated fat loss. Nature Communications. doi:10.1038/s41467-024-51077-3



MedlinePlus: Trusted Health Information from the National Institutes of Health

4 discoveries beyond the brain

NIH research explores early signs of brain disorders

Scientists developed a simple skin biopsy that could identify people with certain disorders, including Parkinson’s disease.

Neurodegenerative diseases—such as Alzheimer’s disease, Parkinson’s disease (PD), Lewy body dementia (LBD), and amyotrophic lateral sclerosis (also known as ALS or Lou Gehrig’s disease)—affect millions of people around the world. These conditions progressively damage nerve cells in the brain and nervous system. Over time, this can lead to problems with movement, thinking, memory, and more.

A century ago, many neurological conditions could only be diagnosed through an autopsy (after the person had died). Fortunately, today’s doctors and scientists have more ways to examine the brains and nervous systems of living patients. But these disorders can still be challenging to detect. Current diagnostic tools often identify these diseases after they have already started to damage the brain.

The National Institute of Neurological Disorders and Stroke (NINDS) leads research to help better understand, diagnose, and treat these conditions. Here are four recent discoveries that may help doctors and scientists spot early signs of damage, develop and test new treatments, and figure out who might benefit most from specific therapies.

Heart imaging reveals early signs

NINDS researchers at the NIH Clinical Center used a new method to identify early signs of PD and LBD. This team used a special type of PET scan to look at the hearts of people at high risk for these diseases. They found that people who later developed PD or LBD had levels of a chemical called norepinephrine in their hearts that were much lower than is typical, years before they showed any symptoms.

These findings suggest that PD or LBD might start in the part of the nervous system that controls automatic body functions (like heart rate and blood pressure) even before they affect the brain. Being able to spot these early signs could change how doctors understand and treat these diseases.

Blood tests for mitochondrial damage

NIH-funded researchers are developing a blood test that measures the level of damage to the DNA inside mitochondria—the cell’s energy producers. Previous research suggests that mitochondrial damage may be linked to some cases of PD, so focusing on this damage may help identify and diagnose PD early on. In this study, blood samples from people with PD showed more cell damage compared to samples from healthy volunteers. Some people with PD also had more damage than others.

Researchers still need to show that the test works in larger and more diverse populations. If successful, the test could help identify treatments that target mitochondria, reveal which patients are most likely to respond to certain treatments, and show whether a treatment is working.

Artificial intelligence analyzes sleep breathing patterns

In another innovative study, NINDS-funded researchers used an artificial intelligence (AI) program to identify PD by analyzing breathing patterns during sleep. The researchers tested the AI program using two types of sleep data: breathing patterns and brain activity.

By looking at 12 nights of sleep test data from people with and without PD, the program was able to identify those with PD with a high degree of accuracy. It also detected small changes in PD symptoms over a longer period of time more accurately than traditional clinical assessments.

This program could help both doctors and researchers. By using this tool, doctors may find PD earlier, and researchers may develop new treatments more easily and quickly. However, researchers first need to test it with more people from diverse backgrounds. They also think it could be especially helpful for people who live in remote areas or have trouble leaving home.

Top: A participant wearing a chest belt during a sleep study to measure breathing patterns. Bottom: A wireless sensor uses radio signals to monitor breathing patterns without physical contact during sleep.

Skin biopsy for neurodegenerative diseases

NIH-funded researchers developed a simple skin biopsy that may identify people with PD, LBD, and related disorders. This quick, nearly painless test looks for phosphorylated alpha-synuclein, a specific protein that’s associated with certain neurodegenerative diseases.  

In this study, researchers looked at small skin samples from people diagnosed with one of these conditions and people without any history of neurodegenerative diseases. The test found this protein in more than 90% of people with a diagnosis compared to only 3% of individuals without one. This could lead to faster, more accurate diagnoses and earlier treatments for patients.

Sources: NIH Research Matters, National Institute of Neurological Disorders and Stroke, MedlinePlus

August 29, 2024
