{"id":144542,"date":"2016-03-15T10:59:55","date_gmt":"2016-03-15T14:59:55","guid":{"rendered":"http:\/\/www.rochester.edu\/newscenter\/?p=144542"},"modified":"2016-03-28T09:40:03","modified_gmt":"2016-03-28T13:40:03","slug":"university-of-rochester-team-leads-competition-for-best-image-captions-by-computers","status":"publish","type":"post","link":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/","title":{"rendered":"Paying attention to words, not just images, leads to better captions"},"content":{"rendered":"<h2>University team leads competition for best\u00a0computer-generated captions<\/h2>\n<p>A team of\u00a0University and Adobe researchers is outperforming other approaches to creating computer-generated image captions in an international competition. The key to their winning approach? Thinking about words \u2013 what they mean and how they fit in a sentence structure \u2013 just as much as thinking about the image itself.<\/p>\n<p>The Rochester\/Adobe model mixes the two approaches that are often used in image captioning: the \u201ctop-down\u201d approach, which starts from the \u201cgist\u201d of the image and then converts it into words, and the \u201cbottom-up\u201d approach, which first assigns words to different aspects of the image and then combines them together to form a sentence.<\/p>\n<p>The Rochester\/Adobe model is currently beating Google, Microsoft, Baidu\/UCLA, Stanford University, University of California Berkeley, University of Toronto\/Montreal, and others to top the leaderboard in an image captioning competition run by Microsoft, called the Microsoft COCO Image Captioning Challenge. 
While the winner of the year-long competition is still to be determined, the Rochester \u201cAttention\u201d system \u2013 or ATT on the leaderboard \u2013 has been leading the field since last November.<\/p>\n<p>Other groups have also tried to combine the two methods, adding a feedback mechanism that lets a system improve on what either approach could achieve alone. However, several systems that blend the two approaches have focused on \u201cvisual attention,\u201d which weighs which parts of an image are visually most important in order to describe it better.<\/p>\n<p>The Rochester\/Adobe system focuses on what the researchers describe as \u201csemantic attention.\u201d In a paper accepted by the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), entitled <strong><a href=\"http:\/\/arxiv.org\/abs\/1603.03925\" target=\"_blank\">\u201cImage Captioning with Semantic Attention,\u201d<\/a><\/strong>\u00a0computer science professor Jiebo Luo and his colleagues define semantic attention as \u201cthe ability to provide a detailed, coherent description of semantically important objects that are needed exactly when they are needed.\u201d<\/p>\n<figure id=\"attachment_144962\" aria-describedby=\"caption-attachment-144962\" style=\"width: 275px\" class=\"wp-caption alignleft\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-144962\" src=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-baby.jpg\" alt=\"baby with a toothbrush\" width=\"275\" height=\"275\" srcset=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-baby.jpg 300w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-baby-32x32.jpg 32w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-baby-64x64.jpg 64w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-baby-96x96.jpg 96w, 
https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-baby-128x128.jpg 128w\" sizes=\"auto, (max-width: 275px) 100vw, 275px\" \/><figcaption id=\"caption-attachment-144962\" class=\"wp-caption-text\">Google caption: &#8220;A baby is eating a piece of paper.&#8221;<br \/>Rochester ATT caption: &#8220;A baby with a toothbrush in its mouth.&#8221;<\/figcaption><\/figure>\n<figure id=\"attachment_144972\" aria-describedby=\"caption-attachment-144972\" style=\"width: 275px\" class=\"wp-caption alignleft\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-144972\" src=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-cake.jpg\" alt=\"teddy bear cake with candles\" width=\"275\" height=\"275\" srcset=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-cake.jpg 300w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-cake-32x32.jpg 32w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-cake-64x64.jpg 64w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-cake-96x96.jpg 96w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-cake-128x128.jpg 128w\" sizes=\"auto, (max-width: 275px) 100vw, 275px\" \/><figcaption id=\"caption-attachment-144972\" class=\"wp-caption-text\">Google caption: &#8220;A close-up of a plate of food on a table.&#8221;<br \/>Rochester ATT caption: &#8220;A table topped with a cake with candles on it.&#8221;<\/figcaption><\/figure>\n<figure id=\"attachment_144982\" aria-describedby=\"caption-attachment-144982\" style=\"width: 275px\" class=\"wp-caption alignleft\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-144982\" src=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-food.jpg\" alt=\"a plate of food\" width=\"275\" height=\"275\" 
srcset=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-food.jpg 300w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-food-32x32.jpg 32w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-food-64x64.jpg 64w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-food-96x96.jpg 96w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/captions-food-128x128.jpg 128w\" sizes=\"auto, (max-width: 275px) 100vw, 275px\" \/><figcaption id=\"caption-attachment-144982\" class=\"wp-caption-text\">Google caption: &#8220;A white plate with a variety of food.&#8221;<br \/>Rochester ATT caption: &#8220;A plate with a sandwich and french fries.&#8221;<\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n<p>\u201cTo describe an image you need to decide what to pay more attention to,\u201d said Luo. \u201cIt is not only about what is in the center of the image or a bigger object, it\u2019s also about coming up with a way of deciding on the importance of specific words.\u201d<\/p>\n<p>For example, take an image that shows a table and seated people. The table might be at the center of the image, but a better caption might be \u201ca group of people sitting around a table\u201d rather than \u201ca table with people seated.\u201d Both are correct, but the former also takes into account what is likely to interest readers and viewers.<\/p>\n<p>Computer image captioning brings together two key areas of artificial intelligence: computer vision and natural language processing. On the computer vision side, researchers train their systems on a massive dataset of images so that they learn to identify objects in images. Language models can then be used to put these words together. Luo and his team also trained their system on a large body of text. 
The objective was to learn not only sentence structure but also the meanings of individual words, which words tend to appear together, and which words are semantically more important.<\/p>\n<p>The related paper can be found online at <strong><a href=\"http:\/\/arxiv.org\/abs\/1603.03925\" target=\"_blank\">http:\/\/arxiv.org\/abs\/1603.03925<\/a><\/strong>. The Rochester\/Adobe team consists of Luo; doctoral student Quanzeng You; and their Adobe collaborators, Hailin Jin, Zhaowen Wang, and Chen Fang. They will present this work as a &#8220;Spotlight&#8221; to the computer vision community at CVPR 2016, to be held in Las Vegas in late June.<\/p>\n<p>A closely related paper on video captioning by Luo, graduate student Yuncheng Li, and their Yahoo Research colleagues Yale Song, Liangliang Cao, Joel Tetreault, and Larry Goldberg, <strong><a href=\"https:\/\/drive.google.com\/file\/d\/0B82ZmnI98gjqbWdPTC1pQmZYcFE\/view?usp=sharing\" target=\"_blank\">\u201cTGIF: A New Dataset and Benchmark on Animated GIF Description,\u201d<\/a><\/strong> will also be featured as a &#8220;Spotlight&#8221; presentation at CVPR.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A team of\u00a0University and Adobe researchers is outperforming other approaches to creating computer-generated image captions in an international competition. The key to their winning approach? 
Thinking about words \u2013 what they mean and how they fit in a sentence structure \u2013 just as much as thinking about the image itself.<\/p>\n","protected":false},"author":6,"featured_media":144992,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[116],"tags":[11716,18802,18632,24202],"class_list":["post-144542","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-sci-tech","tag-data-science","tag-department-of-computer-science","tag-hajim-school-of-engineering-and-applied-sciences","tag-jiebo-luo"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Paying attention to words, not just images, leads to better captions<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Paying attention to words, not just images, leads to better captions\" \/>\n<meta property=\"og:description\" content=\"A team of\u00a0University and Adobe researchers is outperforming other approaches to creating computer-generated image captions in an international competition. The key to their winning approach? 
Thinking about words \u2013 what they mean and how they fit in a sentence structure \u2013 just as much as thinking about the image itself.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/\" \/>\n<meta property=\"og:site_name\" content=\"News Center\" \/>\n<meta property=\"article:published_time\" content=\"2016-03-15T14:59:55+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2016-03-28T13:40:03+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/fea-computer-captioning.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"600\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Leonor Sierra\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@leonor_sierra\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Leonor Sierra\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/\"},\"author\":{\"name\":\"Leonor Sierra\",\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/#\\\/schema\\\/person\\\/b7147819f5697bc51d79e734e5a9efcf\"},\"headline\":\"Paying attention to words, not just images, leads to better captions\",\"datePublished\":\"2016-03-15T14:59:55+00:00\",\"dateModified\":\"2016-03-28T13:40:03+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/\"},\"wordCount\":741,\"image\":{\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/wp-content\\\/uploads\\\/2016\\\/03\\\/fea-computer-captioning.jpg\",\"keywords\":[\"data science\",\"Department of Computer Science\",\"Hajim School of Engineering and Applied Sciences\",\"Jiebo Luo\"],\"articleSection\":[\"Science &amp; Technology\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/\",\"url\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/\",\"name\":\"Paying attention to words, not just images, leads to better 
captions\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/wp-content\\\/uploads\\\/2016\\\/03\\\/fea-computer-captioning.jpg\",\"datePublished\":\"2016-03-15T14:59:55+00:00\",\"dateModified\":\"2016-03-28T13:40:03+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/#\\\/schema\\\/person\\\/b7147819f5697bc51d79e734e5a9efcf\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/wp-content\\\/uploads\\\/2016\\\/03\\\/fea-computer-captioning.jpg\",\"contentUrl\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/wp-content\\\/uploads\\\/2016\\\/03\\\/fea-computer-captioning.jpg\",\"width\":1000,\"height\":600,\"caption\":\"image of a baby with a tootbrush features the words A BABY EATING A PIECE OF 
PAPER?\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Paying attention to words, not just images, leads to better captions\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/#website\",\"url\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/\",\"name\":\"News Center\",\"description\":\"University of Rochester\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/#\\\/schema\\\/person\\\/b7147819f5697bc51d79e734e5a9efcf\",\"name\":\"Leonor Sierra\",\"description\":\"Leonor Sierra is press officer for science and engineering. She covers computer science, electrical and computer engineering, laboratory for laser energetics, optics, mechanical engineering, physics and astronomy, and the Office of the Dean of Engineering and Applied Sciences.\",\"sameAs\":[\"https:\\\/\\\/x.com\\\/leonor_sierra\"],\"url\":\"https:\\\/\\\/www.rochester.edu\\\/newscenter\\\/author\\\/lsierra\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Paying attention to words, not just images, leads to better captions","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/","og_locale":"en_US","og_type":"article","og_title":"Paying attention to words, not just images, leads to better captions","og_description":"A team of\u00a0University and Adobe researchers is outperforming other approaches to creating computer-generated image captions in an international competition. The key to their winning approach? Thinking about words \u2013 what they mean and how they fit in a sentence structure \u2013 just as much as thinking about the image itself.","og_url":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/","og_site_name":"News Center","article_published_time":"2016-03-15T14:59:55+00:00","article_modified_time":"2016-03-28T13:40:03+00:00","og_image":[{"url":"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/fea-computer-captioning.jpg","width":1000,"height":600,"type":"image\/jpeg"}],"author":"Leonor Sierra","twitter_card":"summary_large_image","twitter_creator":"@leonor_sierra","twitter_misc":{"Written by":"Leonor Sierra","Est. 
reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/#article","isPartOf":{"@id":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/"},"author":{"name":"Leonor Sierra","@id":"https:\/\/www.rochester.edu\/newscenter\/#\/schema\/person\/b7147819f5697bc51d79e734e5a9efcf"},"headline":"Paying attention to words, not just images, leads to better captions","datePublished":"2016-03-15T14:59:55+00:00","dateModified":"2016-03-28T13:40:03+00:00","mainEntityOfPage":{"@id":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/"},"wordCount":741,"image":{"@id":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/#primaryimage"},"thumbnailUrl":"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/fea-computer-captioning.jpg","keywords":["data science","Department of Computer Science","Hajim School of Engineering and Applied Sciences","Jiebo Luo"],"articleSection":["Science &amp; Technology"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/","url":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/","name":"Paying attention to words, not just images, leads to better 
captions","isPartOf":{"@id":"https:\/\/www.rochester.edu\/newscenter\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/#primaryimage"},"image":{"@id":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/#primaryimage"},"thumbnailUrl":"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/fea-computer-captioning.jpg","datePublished":"2016-03-15T14:59:55+00:00","dateModified":"2016-03-28T13:40:03+00:00","author":{"@id":"https:\/\/www.rochester.edu\/newscenter\/#\/schema\/person\/b7147819f5697bc51d79e734e5a9efcf"},"breadcrumb":{"@id":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/#primaryimage","url":"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/fea-computer-captioning.jpg","contentUrl":"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2016\/03\/fea-computer-captioning.jpg","width":1000,"height":600,"caption":"image of a baby with a tootbrush features the words A BABY EATING A PIECE OF PAPER?"},{"@type":"BreadcrumbList","@id":"https:\/\/www.rochester.edu\/newscenter\/university-of-rochester-team-leads-competition-for-best-image-captions-by-computers\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.rochester.edu\/newscenter\/"},{"@type":"ListItem","position":2,"name":"Paying attention to words, not just 
images, leads to better captions"}]},{"@type":"WebSite","@id":"https:\/\/www.rochester.edu\/newscenter\/#website","url":"https:\/\/www.rochester.edu\/newscenter\/","name":"News Center","description":"University of Rochester","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.rochester.edu\/newscenter\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.rochester.edu\/newscenter\/#\/schema\/person\/b7147819f5697bc51d79e734e5a9efcf","name":"Leonor Sierra","description":"Leonor Sierra is press officer for science and engineering. She covers computer science, electrical and computer engineering, laboratory for laser energetics, optics, mechanical engineering, physics and astronomy, and the Office of the Dean of Engineering and Applied Sciences.","sameAs":["https:\/\/x.com\/leonor_sierra"],"url":"https:\/\/www.rochester.edu\/newscenter\/author\/lsierra\/"}]}},"_links":{"self":[{"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/posts\/144542","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/comments?post=144542"}],"version-history":[{"count":14,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/posts\/144542\/revisions"}],"predecessor-version":[{"id":146972,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/posts\/144542\/revisions\/146972"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/media\/144992"}],"wp:attach
ment":[{"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/media?parent=144542"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/categories?post=144542"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/tags?post=144542"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}