University of Rochester

Researchers Discover Second Depth-Perception Mechanism in Brain

March 17, 2008

It's common knowledge that humans and other animals are able to visually judge depth because we have two eyes and the brain compares the images from each. But we can also judge depth with only one eye, and scientists have been searching for how the brain accomplishes that feat.

Now, a team led by a scientist at the University of Rochester believes it has discovered the answer in a small part of the brain that processes both the images from a single eye and also the motion of our bodies.

The team of researchers, led by Greg DeAngelis, Professor in the Department of Brain and Cognitive Sciences at the University of Rochester, has published the findings in the March 20 online issue of the journal Nature.

"It looks as though in this area of the brain, the neurons are combining visual cues and non-visual cues to come up with a unique way to determine depth," says DeAngelis.

DeAngelis says that means the brain uses a whole array of methods to gauge depth. In addition to two-eyed "binocular disparity," the brain makes use of other cues such as motion, perspective, and how objects pass in front of or behind each other to create a representation of the three-dimensional world in our minds.

The researchers say the findings may eventually help children born with misaligned eyes regain more normal binocular vision. The discovery could also help in building more compelling virtual reality environments someday, says DeAngelis, since designers need to know exactly how our brains construct three-dimensional percepts to make virtual reality as convincing as possible.

The new neural mechanism is based on the fact that objects at different distances move across our vision with different directions and speeds due to a phenomenon called motion parallax, says DeAngelis. When staring at a fixed object, any motion we make will cause things nearer than the object to appear to move in the opposite direction, and more distant things to appear to move in the same direction.
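The geometry behind motion parallax can be sketched numerically. In a small-angle approximation, while the eye tracks a fixation point at distance D and the observer moves sideways at speed v, an object at distance d drifts across the retina at roughly v(1/d − 1/D). The function name and sign convention below are illustrative assumptions, not taken from the study:

```python
def retinal_angular_velocity(observer_speed, fixation_dist, object_dist):
    """Approximate angular drift (rad/s) of an object's image while the
    eye tracks a fixation point at fixation_dist, under a small-angle
    approximation. Positive values mean the image moves opposite the
    observer's motion; negative values mean it moves with the observer.
    (Sign convention chosen for illustration.)
    """
    return observer_speed * (1.0 / object_dist - 1.0 / fixation_dist)

# Observer moves sideways at 1 m/s while fixating a point 2 m away.
near = retinal_angular_velocity(1.0, 2.0, 1.0)  # object at 1 m (nearer)
far = retinal_angular_velocity(1.0, 2.0, 4.0)   # object at 4 m (farther)
print(near)  # 0.5  -> drifts opposite the movement, as the text describes
print(far)   # -0.25 -> drifts with the movement
```

Note that the drift magnitude alone does not reveal which side of the fixation point an object lies on; as the article explains next, the brain needs an extra signal to resolve that ambiguity.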

To figure out the real three-dimensional layout of the scene, DeAngelis says, the brain needs one more piece of information, which it pulls from the motion of the eyeball itself.

According to DeAngelis, neurons in the middle temporal area of the brain combine visual information and physical movement to extract depth information. From visual motion alone, the motion of near and far objects can be confused. But if the eye is moving while tracking the overall movement of the group of objects, the middle temporal neurons have enough information to grasp that objects moving across the scene in the same direction as the head must be far away, whereas objects moving in the opposite direction must be close by, says DeAngelis.
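The sign rule DeAngelis attributes to the middle temporal neurons can be illustrated with a toy classifier. The function name and sign conventions here are assumptions for illustration, not part of the study:

```python
def judge_depth(image_velocity, head_velocity):
    """Toy version of the rule described above: while the eye tracks the
    scene, an object whose image drifts in the same direction as the
    head's movement lies beyond the fixation point, and one drifting the
    opposite way lies nearer. Both velocities share one axis, with
    positive meaning rightward (an assumed convention).
    """
    product = image_velocity * head_velocity
    if product > 0:
        return "far"
    if product < 0:
        return "near"
    return "at fixation distance"

# Head moves right (+1.0); an image drifting left (-0.3) marks a near
# object, while one drifting right (+0.2) marks a far object.
print(judge_depth(-0.3, 1.0))  # near
print(judge_depth(0.2, 1.0))   # far
```

The point of the sketch is that neither velocity alone settles the question; only their combination, which is what the study reports these neurons computing, yields the depth sign.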

"We use binocular disparity, occlusion, perspective, and our own motion all together to create a representation of the real, 3D world in our minds," says DeAngelis.

This research was conducted in collaboration with Jacob W. Nadler and Dora E. Angelaki, at Washington University, and was funded by the National Institutes of Health.
