
Microscopic eye movements affect how we see contrast

Michele Rucci, professor of brain and cognitive sciences, with equipment he uses to study small movements that a person is not even aware of making. These small eye movements, once thought to be inconsequential, are critical to the visual system in helping us reconstruct a scene. (University of Rochester photo / J. Adam Fenster)

It is often difficult for a driver to see a person walking on the side of the road at night—especially if the person is wearing dark colors. One of the factors causing this difficulty is a decrease in contrast, making it hard to segment an object, such as a person, from its background. Small eye movements may be crucial to observing contrast.

Researchers previously believed contrast sensitivity function (the minimum difference between light and dark that a person needs in order to detect a pattern) was mainly dictated by the optics of the eye and by processing in the brain. Now, in a study published in the journal eLife, researchers, including Michele Rucci at the University of Rochester, explain that another factor is at play: contrast sensitivity also depends on small eye movements that a person is not even aware of making.

“Historically these movements have been pretty much ignored,” says Rucci, a professor of brain and cognitive sciences at Rochester. “But what seems to be happening is that they are contributing to vision in a number of different ways, including our contrast sensitivity function.”

 

an animated gif showing an eye chart, with the contrast between the letters and the background growing increasingly faint
TEST YOUR CONTRAST SENSITIVITY: The Pelli-Robson test is one type of test to measure contrast sensitivity. The chart has six letters per line, arranged in groups of three letters that share the same contrast level, with contrast decreasing from high at the top left to low at the bottom right. To test your contrast sensitivity, read the letters starting with the highest-contrast group at the top left, and continue until you are unable to read two or three of the letters in a single group.

 

When we fix our eyes on a single point, the world may appear stable, but at the microscopic level, our eyes are constantly jittering. These small eye movements, once thought to be inconsequential, are critical to the visual system in helping us reconstruct a scene, Rucci says. “Some scientists believed that because they are so small, the eye movements might not have much impact, but compared to the size of the photoreceptors on the retina, they are huge, and they are changing the input on the retina.”

Think of a scene or object like a computer image made up of different pixels, or points. Each point is a different color, intensity, luminance, and so on. Our eyes take in signals from each of the points and project the signals onto photoreceptors on the retina: the arrangement of these points makes a spatial pattern that we perceive as a scene or object. But, if a spatial pattern is projected as a stationary image, it will fade from view once the retina’s photoreceptors become desensitized to the signal—like a student who becomes bored in class if the teacher repeats the same information over and over again.

Researchers have long known that the tiny eye movements—always jittering and taking in different points—continually change the signal to the retina and refresh the image so it does not fade. However, the new research, which was funded by the National Eye Institute, the National Science Foundation, and the Harvard/MIT Joint Research Program, suggests that these movements do more than prevent fading; they are one of the very mechanisms by which the visual system functions, Rucci says. “The way the visual system encodes information is based on these temporal changes. Eye movements transform a spatial pattern into temporal changes on the retina.”
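
To make this idea concrete, here is a minimal sketch (in Python, and not the actual model from the study) of how a tiny, random gaze jitter turns a static striped pattern into a luminance signal that fluctuates over time at a single point on the retina; when the image is artificially stabilized, that signal never changes. The grating parameters and jitter size are arbitrary, illustrative values.

```python
import numpy as np

# Illustrative values only: an 8 cycles-per-degree grating at 50% contrast.
SPATIAL_FREQ = 8.0
CONTRAST = 0.5

def grating(x_deg):
    """Luminance of a static vertical grating at horizontal position x_deg (degrees)."""
    return 0.5 + 0.5 * CONTRAST * np.sin(2 * np.pi * SPATIAL_FREQ * x_deg)

rng = np.random.default_rng(0)
n_steps = 500                 # e.g., 500 samples during one fixation
jitter_per_step = 0.01        # about 0.6 arcminutes of gaze motion per sample (illustrative)

# Fixational jitter modeled as a tiny random walk of gaze position.
eye_position = np.cumsum(rng.normal(0.0, jitter_per_step, n_steps))

# Luminance falling on one photoreceptor over time:
signal_with_jitter = grating(eye_position)        # the pattern is swept across it
signal_stabilized = grating(np.zeros(n_steps))    # perfectly stabilized image

print("temporal variation with jitter:     ", signal_with_jitter.std())
print("temporal variation when stabilized: ", signal_stabilized.std())   # exactly 0
```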

The system is similar to that involved in the sense of touch: to glean information about the surface of a solid object, we do not simply place our fingertips on the surface, but also move them along the object. We are able to perceive the object based on the interaction between a sensory process (the tactile receptors in our fingers) and a motor process (the way we move our fingertips). “Since our eyes are never at rest even when we fixate a point in the visual scene, a similar mechanism holds for vision,” says Antonino Casile, a researcher at the Istituto Italiano di Tecnologia (Italian Institute of Technology) and a co-author of the paper. “Contrast sensitivity results from the interaction of two processes: a sensory process—the response properties of neurons in the early visual system—and a motor process.”

Measuring eye movements and contrast sensitivity

In order to measure contrast sensitivity and whether or not eye movements play a role, the researchers showed human participants gratings with black and white stripes. The researchers gradually varied the width of the stripes, making them “thinner and thinner, until the participants eventually said they no longer saw separate bars,” Rucci says. The fineness of the bars (how many stripes fall within a given area, which is inversely related to their width) is known as the spatial frequency. For each spatial frequency, researchers measured the minimum contrast between the dark and light bars that participants needed in order to see the pattern, while, at the same time, carefully measuring their eye movements.
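
In outline, such a measurement loops over spatial frequencies and, at each one, lowers the grating’s contrast until it can no longer be told apart from a uniform gray field; the reciprocal of that threshold contrast is the sensitivity at that frequency. The sketch below only shows the shape of that procedure, with a made-up observer function standing in for a human participant; it is not the psychophysical method or the data from the paper.

```python
import numpy as np

def observer_detects(spatial_freq, contrast):
    """Stand-in for a human response: detection is easiest at moderate spatial
    frequencies and higher contrasts. Made-up numbers, not real data."""
    sensitivity = 100 * np.exp(-((np.log2(spatial_freq) - 2.0) ** 2) / 2.0)
    return contrast >= 1.0 / sensitivity

def contrast_threshold(spatial_freq, step=0.9):
    """Lower the contrast until the (simulated) observer no longer sees the bars."""
    contrast = 1.0
    while contrast > 1e-4 and observer_detects(spatial_freq, contrast):
        contrast *= step
    return contrast

# Contrast sensitivity function: sensitivity = 1 / threshold contrast, per frequency.
for freq in [0.5, 1, 2, 4, 8, 16]:               # cycles per degree
    threshold = contrast_threshold(freq)
    print(f"{freq:>4} cyc/deg   threshold = {threshold:.4f}   sensitivity = {1 / threshold:.1f}")
```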

Small eye movements may be crucial to seeing contrast variations

To measure contrast sensitivity, the researchers showed human participants gratings with black and white stripes of different widths (known as spatial frequency, image A). For each frequency, they determined the minimum amount of contrast (the separation between black and white, image B) that would enable the subjects to still see the grating pattern. The resulting function of spatial frequency is known as the contrast sensitivity function. You can see your own contrast sensitivity function in image C, where frequency varies along the horizontal axis and contrast varies along the vertical axis.

The researchers then simulated this task in a computer model of the retina to see if the responses of neurons in the retina matched the human subjects’ contrast sensitivity. “We found that they are only compatible when we include the motion of the eye movements,” Rucci says. “When we don’t include this movement factor in the computer model, the simulated neurons don’t give the same responses that the subjects do.”
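
A crude way to picture why the motion matters in such a simulation: retinal neurons respond mostly to changes in their input over time, so a model neuron viewing a perfectly stabilized grating soon stops responding, while the same neuron driven through fixational drift keeps signaling the pattern. The toy adapting unit below is an assumption made for illustration, not the retinal model used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def grating(x_deg, spatial_freq=8.0, contrast=0.5):
    """Static black-and-white grating (illustrative parameter values)."""
    return 0.5 + 0.5 * contrast * np.sin(2 * np.pi * spatial_freq * x_deg)

def adapting_neuron(luminance, tau=0.05, dt=0.002):
    """Toy retinal neuron: it signals the difference between the current input
    and a slowly adapting baseline, so a constant input fades to zero response."""
    baseline, responses = luminance[0], []
    for lum in luminance:
        baseline += (lum - baseline) * (dt / tau)   # leaky adaptation
        responses.append(lum - baseline)
    return np.array(responses)

n_steps = 1000
drift = np.cumsum(rng.normal(0.0, 0.01, n_steps))   # fixational drift of gaze (degrees)

resp_with_motion = adapting_neuron(grating(drift))
resp_stabilized = adapting_neuron(grating(np.zeros(n_steps)))

# With eye movements the neuron keeps responding; with a stabilized image its
# response decays toward zero, and the pattern would fade from view.
print("mean |response| with eye movements:   ", np.abs(resp_with_motion).mean())
print("mean |response| with stabilized image:", np.abs(resp_stabilized).mean())
```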

Knowing that eye movements do affect contrast sensitivity, researchers are able to input this factor into models of human vision, providing more accuracy in understanding exactly how the visual system processes information—and what can go wrong when the visual system fails. The research also highlights that movement and motor behavior may be more fundamental to vision than previously thought, Rucci says. “Vision isn’t just taking an image and processing it via neurons. The visual system uses an active scheme to extract and encode information. We see because our eyes are always moving, even if we don’t know it.”