University of Rochester

Whether High Tea or Takeout, This Robot's Always Socially Correct

November 18, 1993

The Rochester Robot's suave sophistication might impress Miss Manners herself: Wine glasses -- must be a formal dinner. No linen napkins -- don't expect an elegant meal. This robot hovers over a dining table, assesses a variety of visual cues, and draws conclusions.

University of Rochester graduate student Ray Rimey is the man responsible for putting the social polish into the Rochester Robot, though his purpose wasn't to build the world's best automated place-setting analyst. Rather, Rimey wanted to find a solution to a problem vexing everyone in artificial intelligence: teaching a computer how to scan a scene and zero in on the most important information, an ability with important applications such as medical diagnostics and satellite image analysis.

The dinner table proved to be a perfect test. Rimey taught the robot to analyze different types of place settings and draw conclusions about them. As it collects visual clues, the robot gains confidence in its answer, which it flashes every few seconds on its computer screen. The robot, for example, can tell whether the dinner is formal or informal, and whether the meal is breakfast, lunch, dinner or dessert. The method extends to such judgments as whether the table is messy, how many guests are coming, and whether they have begun eating yet.

Rimey "taught" the robot these tricks through extensive programming based on decision theory and mathematical constructs known as Bayes nets. The computer vision system he built, dubbed TEA-1, is described in an upcoming issue of the International Journal of Computer Vision.
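The core idea of gaining confidence as cues accumulate can be sketched in a few lines. This is an illustrative simplification, not Rimey's actual TEA-1 system: it uses a single hypothesis variable and invented probabilities, where a real Bayes net would link many variables.

```python
# Illustrative sketch only: a naive Bayesian update over one table-setting
# question (formal vs. informal). The probabilities below are invented,
# not taken from TEA-1.

def update(belief, likelihoods, cue):
    """Multiply the current belief by P(cue | hypothesis), then renormalize."""
    post = {h: belief[h] * likelihoods[cue][h] for h in belief}
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}

# Belief about the dinner before the camera looks at anything.
belief = {"formal": 0.5, "informal": 0.5}

# Hypothetical conditional probabilities P(cue seen | hypothesis).
likelihoods = {
    "wine_glasses":  {"formal": 0.8, "informal": 0.2},
    "linen_napkins": {"formal": 0.7, "informal": 0.1},
}

# Each visual cue the robot finds sharpens its answer.
belief = update(belief, likelihoods, "wine_glasses")
belief = update(belief, likelihoods, "linen_napkins")
```

After spotting wine glasses and linen napkins, the belief in "formal" rises from 0.5 to well over 0.9, mirroring the robot's growing confidence as it scans the table.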

"A computer has only so much processing power," says Rimey, who earned his Ph.D. for this work. "If it needs to solve a certain problem in a given amount of time, it has to prioritize. Given a certain situation, where does the robot look next?"

The problem is not unlike the one a doctor faces when diagnosing a patient, Rimey says. "You could run endless tests for thousands of dollars, but that would take a long time and the person may die." Instead, a good doctor uses knowledge efficiently to decide on a sequence of tests that maximizes information and minimizes time until treatment. "We're trying to automate a similar process."
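The "where should I look next?" decision can be illustrated with a greedy rule: run the visual test with the greatest expected information gain per unit of cost. This is a toy sketch of that general idea, with invented tests and numbers; TEA-1's actual scoring came from decision theory over a full Bayes net.

```python
# Sketch of choosing the next observation: pick the test whose expected
# reduction in uncertainty (entropy), divided by its cost, is largest.
# All tests, probabilities, and costs here are hypothetical.
import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_entropy(prior, likelihood):
    """Average posterior entropy over the test's two outcomes (cue present/absent)."""
    h = 0.0
    for present in (True, False):
        # P(outcome) = sum over hypotheses of P(outcome | h) * P(h)
        p_out = sum(
            (likelihood[hyp] if present else 1 - likelihood[hyp]) * prior[hyp]
            for hyp in prior
        )
        if p_out == 0:
            continue
        post = {
            hyp: (likelihood[hyp] if present else 1 - likelihood[hyp]) * prior[hyp] / p_out
            for hyp in prior
        }
        h += p_out * entropy(post)
    return h

prior = {"formal": 0.5, "informal": 0.5}

# Each test: (P(cue present | hypothesis), cost of running it).
tests = {
    "look_for_wine_glasses": ({"formal": 0.8, "informal": 0.2}, 1.0),
    "count_chairs":          ({"formal": 0.5, "informal": 0.5}, 0.5),  # tells us nothing here
}

def score(name):
    likelihood, cost = tests[name]
    return (entropy(prior) - expected_entropy(prior, likelihood)) / cost

best = max(tests, key=score)
```

Under these made-up numbers, counting chairs yields zero information about formality despite being cheap, so the robot looks for wine glasses first.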

Indeed, the applications of such work include analysis of medical images. One company is applying decision theory to computer vision to analyze aerial images. Honeywell is interested in using Rimey's system to analyze infrared images taken by roving vehicles.

Other University of Rochester students have taken Rimey's work and are applying it to other situations. Peter Von Kaenel has asked the computer to solve visual problems in a world of objects in motion (trains and wandering cows), while Robert Wisniewski investigates decisions involving both observing and interacting with moving objects (herding mechanical sheep).

Many computer scientists agree that the world is simply too complex for a robot to see every detail of its environment at all times and then act; instead, a robot must be selective about where it directs its attention. This step is often overlooked by artificial intelligence researchers, says Christopher Brown, professor of computer science and adviser to Rimey.

"Rimey has built a flexible control structure for vision," says Brown. "Most computer vision work develops methods of image processing. But when it comes to which methods the computer should do in what order -- well, that's usually programmed into the application. Rimey has built a general framework for using prior knowledge and information discovered along the way to choose the best method to apply next."

The Rochester computer vision team takes an approach known as active vision, in which a robot uses sensory input from its environment to simplify its tasks and enhance its abilities. It is one of four U.S. university teams in a Europe-based robot vision consortium that is using the table-setting problem for a variety of projects. Rimey's work was supported by the National Science Foundation and the Advanced Research Projects Agency.

Note to editors: Color slides and black-and-white photographs are available.