Our brains use various reference frames—also known as coordinate systems—to represent the motion of objects in a scene.
Some coordinate systems are more useful than others for representing information. To represent a location on Earth, for example, we might use an Earth-centered coordinate system such as latitude and longitude. In such an Earth-centered coordinate system, a location—such as your home—is constant over time. But you could also represent where you live as a location relative to the sun using a sun-centered coordinate system. Such a system would clearly not be useful for people trying to find where you live, as your address in sun-centered coordinates would change continuously as the Earth rotates relative to the sun.
The human brain faces this same problem of representing information in appropriate coordinate systems and transforming between coordinate systems to guide your actions. This is partly because sensory information is encoded in different reference frames: visual information is initially encoded relative to the eye in eye-centered coordinates, auditory information is initially encoded relative to the head in head-centered coordinates, and so on. A complex set of computations must occur in the brain to combine these sensory signals so that a person can perceive the entire scene.
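To make the idea concrete, here is a minimal sketch in Python of one such coordinate transformation, converting a visual target from eye-centered to head-centered coordinates. The simplified two-dimensional geometry, the function name, and the numbers are our own illustration, not something taken from the study.

```python
# Illustrative sketch (not from the study): converting a visual target's position
# from eye-centered to head-centered coordinates in a simplified 2D world.
import numpy as np

def eye_to_head(target_in_eye, gaze_angle_rad, eye_offset_in_head):
    """Rotate an eye-centered vector by the gaze angle, then shift by the eye's
    position in the head, giving the same target in head-centered coordinates."""
    c, s = np.cos(gaze_angle_rad), np.sin(gaze_angle_rad)
    rotation = np.array([[c, -s],
                         [s,  c]])  # eye-to-head rotation for a 2D gaze angle
    return rotation @ np.asarray(target_in_eye) + np.asarray(eye_offset_in_head)

# Example: a target 10 cm straight ahead of the eye, with gaze rotated 30 degrees
# and the eye sitting 3 cm to the right of the head's midline.
print(eye_to_head([0.0, 10.0], np.radians(30), [3.0, 0.0]))
```

The same target has different coordinates depending on where the eye is pointing, which is exactly why the brain must keep transforming between frames as the eyes and head move.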
But how do neurons represent objects in different reference frames while you move through an environment?
In a paper published in the journal Nature Neuroscience, researchers from the University of Rochester, including Greg DeAngelis, a professor of brain and cognitive sciences, examined how neurons in the brain represent the motion of an object while the observer is also moving.
Specifically, the researchers studied how observers judge an object’s motion either relative to their own heads or relative to the world.
Their findings, which show that neurons in a specific brain region can flexibly switch between reference frames, offer important information about the inner workings of the brain and could potentially be used in neural prosthetics and therapies to treat brain disorders.
Are neurons fixed or flexible?
Imagine you’re playing soccer. If you’re running and want to head the ball, you would need to compute the trajectory of the ball’s motion relative to your head so you can make contact between your head and the ball. A head-centered coordinate system would therefore be useful. Alternatively, if you are running and watching your teammate kick the ball toward the goal, you would need to compute the trajectory of the ball relative to the goal to determine whether or not your teammate scored. This would require a world-centered coordinate system since the goal is fixed relative to the world.
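The two frames can be written down directly. The short Python sketch below is our own illustration of the soccer example (it ignores head rotation and uses made-up numbers): in world-centered coordinates the ball’s velocity is taken as-is, while in head-centered coordinates the player’s own running velocity is subtracted.

```python
# Illustrative sketch of the soccer example (not the study's model); head rotation
# is ignored and the velocities are made up.
import numpy as np

ball_velocity_world = np.array([2.0, 6.0])    # ball's velocity in the world (m/s)
player_velocity_world = np.array([0.0, 4.0])  # player's running velocity (m/s)

# World-centered motion: useful for judging whether the ball is headed for the goal.
ball_motion_world = ball_velocity_world

# Head-centered motion: useful for making contact, so self-motion is removed.
ball_motion_rel_head = ball_velocity_world - player_velocity_world

print("world-centered:", ball_motion_world)
print("head-centered: ", ball_motion_rel_head)
```

The physical event is the same in both cases; only the coordinate system in which it is described, and therefore the quantity the brain needs to compute, changes with the task.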
“Depending on the task being performed, the brain needs to represent object motion in different coordinate systems to be successful,” DeAngelis says. “The big question is: how does the brain do this?”
The researchers wanted to determine whether the brain has to switch between distinct sets of neurons that each have a fixed reference frame (for example, switching between head-centered neurons and world-centered neurons), or whether individual neurons are flexible and update their reference frames according to the immediate demands of the task at hand.
The researchers trained subjects to judge object motion in either head-centered or world-centered coordinates and to switch between them from trial to trial based on a cue.
The researchers recorded signals from neurons in two different areas of the brain and found that neurons in the ventral intraparietal (VIP) area of the brain have a remarkable property: their responses to object motion change depending on the task.
That is, the neurons do not have fixed reference frames, but instead flexibly adapt to the demands of the task and change their reference frames accordingly.
Neurons in VIP represent object motion in head-centered coordinates when subjects are required to report the object’s motion relative to their heads, and in world-centered coordinates when subjects are required to report its motion relative to the world.
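One way to picture what such a flexible population buys you is a toy simulation. The sketch below is purely our illustration, not the paper’s data or analysis: it uses textbook cosine tuning and a population-vector readout, and lets a task cue decide whether each model neuron responds to object motion in head-centered or world-centered coordinates.

```python
# Toy illustration (not the paper's model) of a "flexible" population: the frame
# in which neurons respond follows the task cue, while the readout never changes.
import numpy as np

rng = np.random.default_rng(0)
preferred_dirs = rng.uniform(0, 2 * np.pi, size=200)   # one preferred direction per neuron

def population_response(motion_world, self_motion, task):
    """Cosine-tuned responses; the 'head' task subtracts self-motion, 'world' does not."""
    direction = motion_world - self_motion if task == "head" else motion_world
    angle = np.arctan2(direction[1], direction[0])
    return np.cos(preferred_dirs - angle)               # firing rates (arbitrary units)

def population_vector_decode(rates):
    """The same readout for both tasks: weight each preferred direction by its rate."""
    x = np.sum(rates * np.cos(preferred_dirs))
    y = np.sum(rates * np.sin(preferred_dirs))
    return np.degrees(np.arctan2(y, x))

motion_world = np.array([1.0, 1.0])   # object moving up and to the right in the world
self_motion = np.array([0.0, 1.0])    # observer moving "upward"

for task in ("world", "head"):
    rates = population_response(motion_world, self_motion, task)
    print(task, "task -> decoded direction:", round(population_vector_decode(rates), 1), "degrees")
```

Because the population itself re-expresses the motion in the task-relevant frame, a single, unchanging decoder recovers the right answer for either task.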
Because the neurons’ responses are so flexible, the brain may be able to greatly simplify the process of passing along the information it needs to guide actions.
“This is the first study to show that neurons can flexibly represent spatial information, such as object motion, in different coordinate systems based on the instructions given to the subject,” DeAngelis says. “This means the brain can decode—or ‘read out’—information from this single population of neurons and be able to have the information it needs for either task situation.”
The VIP area is located in the parietal lobe of the brain and receives inputs from visual, auditory, and vestibular (inner ear) senses. This is the first study to test for flexible reference frames, so the VIP area is the only area known to have this property. The researchers suspect, however, that neurons in other areas of the brain may have this property as well.
Applications for neural prosthetics and brain disorders
The research offers important information about the inner workings of the brain and potentially could be used for applications such as neural prosthetics, in which brain activity is used to control artificial limbs or vehicles.
“To make an effective neural prosthetic, you want to collect signals from the brain areas that would be most useful and flexible for performing basic tasks,” DeAngelis says. “If those tasks involve intercepting moving objects, for example, then tapping into signals from VIP might be a way to make a prosthetic work efficiently for a variety of tasks that would involve judging motion relative to the head or the world.”
Although this research is not currently connected to a specific brain disorder, researchers have previously found that humans’ ability to take in sensory information and infer which events in the world caused that sensory input—an ability known as causal inference—is impaired in disorders such as autism and schizophrenia.
“In ongoing and future work, we are studying the neural mechanisms of this causal inference process in more detail, using related tasks that involve interactions between object motion and self-motion,” DeAngelis says.