
Currents

February 17, 2015

A picture is worth how many emotions?

Log on to Twitter, Facebook or other social media and you will find that much of the content shared with you comes in the form of images, not just words. Images can convey a lot more than a sentence might, and will often provoke emotions in the person viewing them.

Jiebo Luo, associate professor of computer science, in collaboration with researchers at Adobe Research, has come up with a more accurate way than was previously possible to train computers to digest data that comes in the form of images.

In a paper presented at the Association for the Advancement of Artificial Intelligence (AAAI) conference in Austin, Texas, the team described what they refer to as a “progressive training deep convolutional neural network.”

In such a system, the trained computer can determine the sentiments an image is likely to elicit. Luo says that the information could be useful for applications as diverse as measuring economic indicators and predicting elections.

Computer analysis of the sentiments expressed in text is already a challenging task. In social media, sentiment analysis is further complicated because many people express themselves through images and videos, which are more difficult for a computer to interpret.

For example, during a political campaign voters will often share their views through pictures. Two different pictures might show the same candidate but make very different political statements. A human could recognize one as a positive portrait of the candidate (e.g., the candidate smiling and raising his arms) and the other as negative (e.g., a picture of the candidate looking defeated). But no human could look at every picture shared on social media; it is truly “big data.” To make informed guesses about a candidate’s popularity, computers need to be trained to digest such data, which is what Luo and his collaborators’ approach can do more accurately than was possible until now.

The researchers treat the task of extracting sentiments from images as an image classification problem: each picture is analyzed and assigned one or more sentiment labels.
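To make that framing concrete, here is a minimal sketch, written in PyTorch, of what such an image sentiment classifier could look like: a small convolutional network that maps each picture to scores over sentiment labels. This is not the authors’ code; the layer sizes and the two-class positive/negative label set are illustrative assumptions.

    import torch
    import torch.nn as nn

    class SentimentCNN(nn.Module):
        def __init__(self, num_classes=2):  # assumption: positive vs. negative
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # 224x224 input halved twice by pooling -> 56x56 feature maps
            self.classifier = nn.Linear(64 * 56 * 56, num_classes)

        def forward(self, x):
            # x: a batch of RGB images, shape (N, 3, 224, 224)
            return self.classifier(self.features(x).flatten(1))

    model = SentimentCNN()
    logits = model(torch.randn(4, 3, 224, 224))  # four dummy images
    probs = logits.softmax(dim=1)                # per-image sentiment scores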

To begin the training process, Luo and his collaborators used a large number of Flickr images that had been loosely labeled with specific sentiments by a machine algorithm, drawn from an existing database known as SentiBank (developed by a group at Columbia University). This gives the computer a starting point for understanding what some images can convey. But each machine-generated label also comes with a likelihood of being true: that is, how sure is the computer that the label is correct?

The key step of the training process comes next, when the researchers discard any images whose sentiment labels might not be true. Only the “better” labeled images are then used for further training, in a progressively improving manner, within the framework of the powerful convolutional neural network. The researchers found that this extra step significantly improved the accuracy of the sentiments with which each picture is labeled.
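Below is a hedged sketch of that progressive filtering step, again in PyTorch and not the paper’s implementation: train on all of the noisily labeled images, keep only the samples whose machine-generated label the current model reproduces with high confidence, and continue training on the cleaner subset. The stand-in linear model, the 0.8 confidence threshold, and the number of rounds are all illustrative assumptions.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, Subset, TensorDataset

    def select_confident(model, dataset, threshold=0.8):
        # Keep indices where the model's predicted probability for the
        # (machine-generated) label meets the threshold; discard the rest.
        model.eval()
        keep = []
        with torch.no_grad():
            for i in range(len(dataset)):
                image, label = dataset[i]
                probs = model(image.unsqueeze(0)).softmax(dim=1)[0]
                if probs[label].item() >= threshold:
                    keep.append(i)
        return keep

    # Stand-in model and data; a real system would use a deep CNN and
    # the SentiBank-labeled Flickr images described above.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
    images = torch.randn(100, 3, 32, 32)        # dummy images
    noisy_labels = torch.randint(0, 2, (100,))  # machine-generated labels
    dataset = TensorDataset(images, noisy_labels)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    current = dataset
    for round_ in range(2):                     # two progressive rounds
        loader = DataLoader(current, batch_size=16, shuffle=True)
        for _ in range(3):                      # a few epochs per round
            for batch_images, batch_labels in loader:
                optimizer.zero_grad()
                loss_fn(model(batch_images), batch_labels).backward()
                optimizer.step()
        # Discard images whose labels the model cannot confidently reproduce.
        keep = select_confident(model, current, threshold=0.8)
        if not keep:                            # nothing confident enough to keep
            break
        current = Subset(current, keep)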
