University of Rochester

Rochester Review
March–April 2012
Vol. 74, No. 4

The Mind’s Eye

How do we transform an ever-changing jumble of visual stimuli into the rich and coherent three-dimensional perception we know as sight? Rochester vision scientists are helping reshape our understanding of how the brain ‘sees.’

By Susan Hagen
WIDE ANGLE: Using a 7-foot-tall semicircular screen that encompasses a viewer’s entire field of vision, David Knill and other Rochester scientists explore how the brain makes sense of information involving peripheral vision and other cognitive processes of perception. (Photo: Adam Fenster)

By the time James Risen arrived at the Napa Valley hotel his wife had booked in celebration of his 60th birthday, he knew something was terribly wrong. Without warning or pain, the right side of his field of vision had gone blank, like someone had pulled a curtain over the area.

“I could only see about half of my normal vision,” he recalls. “It was like not getting the whole picture.”

As he would soon learn from emergency room doctors, “The problem was not with my eyes. There was a problem with my brain.”

Risen had experienced a stroke that damaged his visual cortex, causing blindness on the right side in both eyes. It’s a common complication, estimated to affect up to 50 percent of people who suffer a stroke, and an extremely disorienting one.

“Every time I opened my eyes I was reminded that I had a severe visual problem,” Risen says. When walking in crowded areas, people would just pop into sight, as if from nowhere, because he had no ability to detect objects or movement peripherally on the right side. Taking a hike in the woods was out of the question. “I might run into a tree or step in a pothole,” he says.

Even more unsettling was the message he received from his first visit to a neuro-ophthalmologist. The brain cells that process that portion of his vision were dead and doctors could do nothing to restore his sight. He was advised to adjust: stop driving, sell his house, and move downtown where he could catch a bus to his job as an administrator for a law firm in Columbus, Ohio.

For Risen, the loss of independence was “frightening,” he says. “I was very depressed.”

Not long afterward, Risen became a participant in a University research program on human vision and began the long road to recovery. In the process, he also became part of the growing number of discoveries at Rochester that are helping to reshape our understanding of how the brain “sees.” Using investigative tools that range from a room-sized virtual reality environment to microscopic electronic probes, scientists are exploring how our brains are able to transform the jumble of competing and rapidly changing sensory inputs from our eyes into the rich and coherent three-dimensional perception we know as sight. Their insights are helping to build a better appreciation for the brain’s plasticity and leading to the development of life-altering vision therapies.

It’s not surprising that Risen would land in Rochester for the latest in vision discoveries. Known as the World’s Image Center, the city is home to Kodak, Bausch & Lomb, and Xerox, companies focused on optical engineering and optical systems, many of them developed for use with the eye. Today, even as these corporations downsize, the city boasts the headquarters of more than 80 businesses focused on optics and imaging.

PARALLAX PARADOX: Greg DeAngelis is working to pinpoint the areas of the brain responsible for motion parallax—our ability to discern our three-dimensional relationship to objects around us based on our own motion and distance from the objects. (Photo: Adam Fenster)

For almost a half century, the University’s Center for Visual Science has coupled this local expertise with the skills of researchers across disparate disciplines. Center founder Robert Boynton was a professor of both psychology and optics, two very different fields merged under the rubric of vision. The center brings together 32 faculty members from engineering, optics, neurology, ophthalmology, brain and cognitive sciences, and neurobiology and anatomy. Through funding from the National Eye Institute and the Office of Naval Research, the center provides access to shared experimental facilities and to technical experts like Keith Parkins, one of its senior programmers who creates computer code for everything from 3-D and head-mounted displays to see-through augmented reality systems.

“It’s a kind of beautiful synergy between basic science, engineering, and medicine—all three,” says David Williams, center director for the past 21 years and the dean for research for Arts, Sciences, and Engineering. “There is actually a pretty big cultural gulf between these enterprises,” he says. In most universities, engineers would have little experience with patients, and physicians, little exposure to equipment design and basic science. But through the center, clinicians, researchers, and designers meet regularly to share experimental results, ideas, and sometimes even study participants.

The center is a recognized leader in vision research with its members publishing in journals like Nature, Current Biology, Nature Neuroscience, and the Journal of Neuroscience. If the findings that flow out of this collaboration confirm one thing, it’s that the abilities we take for granted—like sight, depth perception, and hand-eye coordination—are some of the most biologically complex tasks that we undertake as humans.

“More than 50 percent of the cortex, the surface of the brain, is devoted to processing visual information,” points out Williams, the William G. Allyn Professor of Medical Optics. “Understanding how vision works may be a key to understanding how the brain as a whole works.”

“When scientists back in the 1950s met to talk about artificial intelligence, they thought that teaching a computer to play chess would be very difficult, but teaching a computer to see would be easy,” says center member David Knill, professor of brain and cognitive sciences.

“Why? Because chess is hard for humans. Only the rare human with lots of practice becomes a master. But seeing appears easy for us. Even a baby can see. For that matter, insects, birds, and fish can see—albeit differently than humans. Some see better, in fact.”

What researchers now know is that human vision is incredibly complicated. While we’ve developed software that can beat the pants off the best chess master and best our brightest at Jeopardy!, computer models have barely scratched the surface of human vision.

“We mistakenly think of human vision like a camera,” says Knill. “We have this metaphor of an image being cast on the retina and we tend to think of vision as capturing images and sending them to the brain, like a video camera recording to a digital tape.”

But human vision is more akin to speech than photography. From infancy, our brain learns how to construct a three-dimensional environment by interpreting visual sensory signals like shape, size, and occlusion, the way objects that are close obstruct the view of objects farther away. Even nonvisual cues, such as sounds and self-motion, help us understand how we move in space and how to move our bodies accordingly.

“We learn to see,” says Knill. “It’s something we have spent our lives learning to do, so we can’t imagine not understanding what we are seeing.”

That sight is constantly adapting underpins some of the most exciting discoveries in vision science at Rochester. For example, scientists have long assumed that an individual’s basic visual sensitivity, such as the ability to discern slight differences in shades of gray, was fixed. Not so, found Daphne Bavelier, professor of brain and cognitive sciences. In a series of ongoing studies on the effects of playing video games on visual perception, Bavelier has shown that very practiced action gamers become 58 percent better at perceiving fine differences in contrast. Such visual discrimination, she says, is the primary limiting factor in how well a person can see.

“Normally, improving contrast sensitivity means getting glasses or eye surgery—somehow changing the optics of the eye,” says Bavelier. “But we’ve found that action video games train the brain to process the existing visual information more efficiently, and the improvements last for years after game play has stopped.”

More recently, Bavelier and Rochester cognitive scientist Alexandre Pouget found that playing action video games can also train the mind to make the right decisions faster. Video game players in their study developed a heightened sensitivity to what was going on around them, a benefit that could spill over into such everyday activities as driving, reading small print, keeping track of friends in a crowd, and navigating around town.

LONG VIEW: As Krystel Huxlin (standing) and neuroscience graduate student Anasuya Das look on, Maurice DeMay of Rochester demonstrates the peripheral vision exercises he does to strengthen his visual abilities after a stroke damaged his visual cortex. (Photo: Adam Fenster)

“It’s not the case that the action game players are trigger-happy and less accurate: They are just as accurate and also faster,” Bavelier says. “Action game players make more correct decisions per unit time. If you are a surgeon or you are in the middle of a battlefield, that can make all the difference.”

Building on Bavelier’s discovery that video gaming can teach the visual cortex to make better use of the information it receives, Bavelier and Knill have begun research on how to retrain stereopsis, the brain’s ability to perceive depth by combining the slightly disparate views it receives from each eye, in patients who are stereo-blind. Like the effects in 3-D movies, stereopsis is what makes a solid object seem to “pop out” and underlies our ability to judge distances very precisely, such as when we thread a needle or hit a ball, says Knill.
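In rough quantitative terms (a standard textbook approximation, not a formula from the Rochester studies), for eyes separated by a distance $a$, two points at viewing distance $d$ and separated in depth by $\Delta d$ produce an angular disparity between the two retinal images of about

$$\delta \approx \frac{a\,\Delta d}{d^{2}}$$

Because the signal falls off with the square of the viewing distance, stereopsis is exquisitely sensitive at arm’s length, where needles get threaded, and contributes little at long range.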

An expert on depth perception, Knill studies how the brain uses such visual cues to control our behavior in the world. How, for example, does the brain incorporate information from shape, size, shadow, orientation, and position of objects to guide hand movements? What signals allow us to know exactly how far away a cup is on the table, and to grasp for it with such amazing accuracy?

For the stereopsis study, Indu Vedamurthy, a postdoctoral fellow in the center, has designed a 3-D computer game in collaboration with Bavelier and Knill, using computer animations, two-way mirrors, and eye-tracking devices. Up to six days a week for an hour at a time, study participants who have poor stereovision do their best to squash a virtual frog. The catch is that the game strips away the other signals we typically rely on for depth, like perspective and relative speed and motion, and requires the player to rely solely on stereoscopic cues to judge the frog’s location. The team is hopeful that by forcing participants to focus on these cues, they will strengthen their ability to perceive depth.

Greg DeAngelis also explores depth perception but at the basic biological level of single neurons. The professor and chair of brain and cognitive sciences is an expert on motion parallax, a depth cue that rises out of the viewer’s own movements.

With motion parallax, the direction and speed at which an object moves across the retina are directly related to its distance from the viewer. As we move, near objects seem to move in the direction opposite to our head, while objects farther away move with us. “Motion parallax cues are driven by the geometry of the viewing, so it is potentially a very precise measure of distance and a powerful cue to depth,” DeAngelis says.
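The geometry DeAngelis describes can be put in back-of-the-envelope terms (a simplified sketch, not the lab’s own model): for an observer translating sideways at speed $v$, a point at distance $d$ sweeps across the retina at an angular rate of roughly

$$\dot{\theta} \approx \frac{v}{d}$$

so something twice as far away drifts at half the rate. That tight link between retinal speed and distance is what makes the cue potentially so precise.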

“The challenge for us was to understand where in the brain there are neurons that can actually extract information about depth from motion parallax, and until a few years ago, nobody knew.” To solve the puzzle, his team created a virtual reality system with an animation that simulated the movement of objects but in a pattern that was ambiguous unless the viewer moved from side to side. They then measured the firing of neurons in the middle temporal area of the brain, a small area known for processing visual motion.

When individual neurons in this region received only the visual cues from the animation, they fired indiscriminately. But when signals from the movement of the eyes were added, the neurons fired in a way consistent with the three-dimensional layout of the scene.

The experiment demonstrates, says DeAngelis, how single neurons in the brain combine visual images with information about the movement of the eyes to compute depth. Our perception of three dimensions does not rely solely on visual features like shape or occlusion or even on binocular vision.

“The brain uses lots of other signals to make sense of the visual input and one of those is the movement of the eyes,” he says.

“We’ve learned a lot about the function of different areas of the brain over the years by observing humans with brain damage from lesions and strokes,” he says. But such nerve cell loss is typically not confined to a specific region. His lab, by contrast, can temporarily inactivate tiny areas of the cerebral cortex only 1 millimeter in diameter, then observe and map the functions of discrete areas with precision.

Such advances, DeAngelis anticipates, will help to decode how the brain understands even more complicated aspects of depth, like the perception of undulating surfaces and their orientation to the viewer.

(Illustration: Steve Boerner)

Insights into visual perception are important to understanding who we are as a species, researchers say.

“Humans are very visually dominated creatures,” says DeAngelis. “If you compare humans to mice, mice have pretty lousy vision. They rely on whisking, and tremendously on olfaction. Not that our other senses are not important, but a lot of our behavior, like the ability to manipulate things with our hands and work with tools, relies heavily on vision.”

After losing half of that ability, Risen couldn’t agree more. He came to Rochester to work with Krystel Huxlin, an associate professor of ophthalmology and of brain and cognitive sciences who has pioneered the use of vision exercises to help restore sight lost from brain damage caused by a stroke. “The brain is like a big muscle, in its own way, and it requires exercise, and if you want to recover functions, you have to exercise it,” says Huxlin.

The use of brain therapy was a radical idea in medical and scientific circles not too long ago, one that was met with considerable skepticism: once nerve cells die, the thinking went, they don’t come back, no matter how much they are stimulated.

But Huxlin’s work has not only shown improvement in vision; it’s also helping scientists better understand the brain’s powerful ability to relearn a skill using alternative neural pathways if given the right coaching.

To rebuild peripheral visual perception, study participants stare at a tiny target in the center of a computer screen while a quarter-sized pattern of moving dots flashes for half a second in their blind field. Without glancing at the moving pattern, participants try to judge the direction in which the dots are drifting. A second exercise uses a circle of bars; the goal is to identify whether the bars are oriented vertically or horizontally.

Compared to running laps and lifting weights, leaning on a chin rest and staring at dots doesn’t sound exactly taxing. Wrong, say Risen and other participants.

“It is very tedious, and it’s focus, focus,” says Risen, who has done the exercises five days a week at home for the past 18 months. “It’s very easy to cheat even if you don’t want to” by inadvertently looking at the moving dots, he says. The sessions include 300 trials each, two times a day, a process that takes about an hour. Progress requires months of consistent practice. “If I worked out as much as I do this, I’d be an Adonis,” he says.

“The reason this works is because we are hammering at the exact same spot in the visual field, and at the same neural circuits, over and over again,” says Huxlin. Although the stroke has destroyed the cells that typically transmit visual signals, other, weaker pathways also carry visual stimuli. “What we think is happening is that the training is basically reawakening or driving these alternative pathways harder to the point that the information then reaches consciousness.”

Once the brain recovers the ability to detect motion stimuli from the exercises, most other aspects of vision recover automatically, she says.

But does the improvement that Huxlin is able to measure precisely on the computer screen translate to a real-world ability to understand the three-dimensional world? That’s one of the questions Knill is working with Huxlin to explore.

To study individuals with vision damage similar to Risen’s, Laurel Issen, a graduate student working with Knill and Huxlin, employs a virtual reality system. Participants sit in front of a 7-foot-tall, semicircular screen that encompasses their entire field of vision. There, they experience a pattern of moving dots. Think of sci-fi movie animations in which space explorers fly through an asteroid field, says Knill. The dots move past in ways that simulate physical movement in a certain direction.

The beauty of this elaborate setup, he notes, is that researchers can manipulate the pattern of dots in the subject’s blind field. Eventually Knill and Issen plan to test participants before they begin Huxlin’s regimen of eye exercises, and again after months of therapy to document improvements in the damaged areas.

In the meantime, Risen is thrilled with the personal measures of his recovery. He’s experienced “significant improvement” in his vision, and his “life is much easier now. I’m more comfortable in my environment.” Last September he officially cleared the hurdle he had been dreading for three years: he passed the peripheral vision test for his driver’s license, which involves being able to detect a flash of light to the side.

“When I saw that light, I was the happiest man in the world,” he says.


Susan Hagen writes about the social sciences for University Communications.