{"id":224862,"date":"2017-03-10T14:56:07","date_gmt":"2017-03-10T19:56:07","guid":{"rendered":"http:\/\/www.rochester.edu\/newscenter\/?p=224862"},"modified":"2024-10-29T15:14:35","modified_gmt":"2024-10-29T19:14:35","slug":"machine-learning-advances-human-computer-interaction","status":"publish","type":"post","link":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/","title":{"rendered":"Machine learning advances human-computer interaction"},"content":{"rendered":"<div class=\"embed-container\"><iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/lF_tM9lwrCA\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/div>\n<p><em>A natural language model developed in the Robotics and Artificial Intelligence Laboratory allows a user to speak a simple command, which the robot can translate into an action. If the robot is given a command to pick up a particular object, it can differentiate between other objects nearby, even if they are identical in appearance.<\/em><\/p>\n<p id=\"Baxter\">Inside the University of Rochester\u2019s Robotics and Artificial Intelligence Laboratory, a robotic torso looms over a row of plastic gears and blocks, awaiting instructions. Next to him, Jacob Arkin \u201913, a doctoral candidate in electrical and computer engineering, gives the robot a command: \u201cPick up the middle gear in the row of five gears on the right,\u201d he says to the Baxter Research Robot. 
The robot, sporting a University of Rochester winter cap, pauses before turning, extending its right limb in the direction of the object.<\/p>\n<p><a href=\"http:\/\/www.rochester.edu\/news\/unlocking-big-data\/\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-220142 size-full\" style=\"border: none;\" src=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/02\/dandelion-data-science-logo.jpg\" alt=\"illustration of dandelion with data as seeds\" width=\"400\" height=\"214\" \/><\/a><\/p>\n<h2 class=\"lighter\">Unlocking big data<\/h2>\n<p>&nbsp;<\/p>\n<h3 class=\"lighter\">A Newscenter series on how Rochester is using data science to change how we research, how we learn, and how we understand our world.<\/h3>\n<p>&nbsp;<\/p>\n<p>Baxter, along with other robots in the lab, is learning how to perform human tasks and to interact with people as part of a human-robot team. \u201cThe central theme through all of these is that we use language and machine learning as a basis for robot decision making,\u201d says Thomas Howard \u201904, an assistant professor of electrical and computer engineering and director of the University\u2019s robotics lab.<\/p>\n<p id=\"Turing\">Machine learning, a subfield of artificial intelligence, started to take off in the 1950s, after the British mathematician Alan Turing published a revolutionary paper about the possibility of devising machines that think and learn. His famous Turing Test deems a machine intelligent if a person interacting with it cannot reliably distinguish it from a human being.<\/p>\n<p>Today, machine learning provides computers with the ability to learn from labeled examples and observations of data\u2014and to adapt when exposed to new data\u2014instead of having to be explicitly programmed for each task. 
Researchers are developing computer programs to build models that detect patterns, draw connections, and make predictions from data that inform decisions about what to do next.<\/p>\n<p>The results of machine learning are apparent everywhere, from Facebook\u2019s personalization of each member\u2019s News Feed, to speech recognition systems like Siri, email spam filtering, financial market tools, recommendation engines such as those used by Amazon and Netflix, and language translation services.<\/p>\n<figure id=\"attachment_225722\" aria-describedby=\"caption-attachment-225722\" style=\"width: 400px\" class=\"wp-caption alignright\"><a href=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robot.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-225722 size-full\" src=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robot.jpg\" alt=\"man with robot holding a coffee cup in front of him\" width=\"400\" height=\"367\" \/><\/a><figcaption id=\"caption-attachment-225722\" class=\"wp-caption-text\">Thomas Howard is pictured with a Baxter robot in his lab in Gavett Hall (University photo \/ J. Adam Fenster)<\/figcaption><\/figure>\n<p id=\"Howard\">Howard and other University professors are developing new ways to use machine learning to provide insights into the human mind and to improve the interaction between computers, robots, and people.<\/p>\n<p>Howard, Arkin, and collaborators at MIT developed mathematical models that allow Baxter to understand complex natural language instructions. 
When Arkin directs Baxter to \u201cpick up the middle gear in the row of five gears on the right,\u201d their models enable the robot to quickly learn the connections between audio, environmental, and video data, and adjust algorithm characteristics to complete the task.<\/p>\n<p>What makes this particularly challenging is that robots need to be able to process instructions in a wide variety of environments and to do so at a speed that makes for natural human-robot dialog. The <strong><a href=\"http:\/\/www.roboticsproceedings.org\/rss12\/p37.html\">group\u2019s research<\/a><\/strong> on this problem led to a <strong><a href=\"http:\/\/rss2016.engin.umich.edu\/awards.html\">Best Paper Award<\/a><\/strong> at the Robotics: Science and Systems 2016 conference.<\/p>\n<p>By improving the accuracy, speed, scalability, and adaptability of such models, Howard envisions a future in which humans and robots perform tasks in manufacturing, agriculture, transportation, exploration, and medicine cooperatively, combining the accuracy and repeatability of robotics with the creativity and cognitive skills of people.<\/p>\n<p>\u201cIt is quite difficult to program robots to perform tasks reliably in unstructured and dynamic environments,\u201d Howard says.\u00a0 \u201cIt is essential for robots to accumulate experience and learn better ways to perform tasks in the same way that we do, and algorithms for machine learning are critical for this.\u201d<\/p>\n<div class=\"embed-container\"><iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/dawH-t3MGvc\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/div>\n<p><em>Jake Arkin, PhD student in electrical and computer engineering, demonstrates a natural language model for training a robot to complete a particular task.<\/em><\/p>\n<h2><strong>Using machine learning to make predictions<\/strong><\/h2>\n<p id=\"Luo\">A photograph of a stop sign contains visual patterns and features 
such as color, shape, and letters that help human beings identify it as a stop sign. To train a computer to identify a person or an object, researchers must teach it to see these features as distinctive patterns of data.<\/p>\n<p>\u201cFor human beings to recognize another person, we take in their eyes, nose, mouth,\u201d says Jiebo Luo, an associate professor of computer science. \u201cMachines do not necessarily \u2018think\u2019 like humans.\u201d<\/p>\n<p>While Howard creates algorithms that allow robots to understand spoken language, Luo employs the power of machine learning to teach computers to identify features and detect configurations in social media images and data.<\/p>\n<p>\u201cWhen you take a picture with a digital camera or with your phone, you\u2019ll probably see little squares around everyone\u2019s faces,\u201d Luo says. \u201cThis is the kind of technology we use to train computers to identify images.\u201d<\/p>\n<p>Using these advanced computer vision tools, Luo and his team train artificial neural networks\u2014a machine learning technique\u2014to enable computers to sort online images and to determine, for instance, <strong><a href=\"https:\/\/www.rochester.edu\/newscenter\/a-picture-is-worth-1000-words-but-how-many-emotions-89012%E2%80%AC\/\">emotions in images<\/a><\/strong>, <strong><a href=\"https:\/\/www.rochester.edu\/newscenter\/new-technology-can-mine-data-from-instagram-to-monitor-and-understand-teenage-drinking-patterns-126442\/\">underage drinking patterns<\/a><\/strong>, and <strong><a href=\"https:\/\/www.rochester.edu\/newscenter\/what-twitter-and-data-science-tell-us-about-the-2016-election-218762\/\">trends in presidential candidates\u2019 Twitter followers<\/a>.<\/strong><\/p>\n<p>Artificial neural networks mimic the neural networks of the human brain, identifying images or parsing complex abstractions by dividing them into pieces, making connections, and finding patterns. 
However, machines do not perceive actual images the way a human being sees them; the pieces are converted into data patterns and numbers, and the machine learns to identify these through repeated exposure to data.<\/p>\n<p>\u201cEssentially everything we do is machine learning,\u201d Luo says. \u201cYou need to teach the machine many times that this is a picture of a man, this is a woman, and it eventually leads it to the correct conclusion.\u201d<\/p>\n<figure id=\"attachment_225102\" aria-describedby=\"caption-attachment-225102\" style=\"width: 630px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-225102\" src=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/machine-learning-infographic-630x378.jpeg\" alt=\"\" width=\"630\" height=\"378\" srcset=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/machine-learning-infographic-630x378.jpeg 630w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/machine-learning-infographic-193x117.jpeg 193w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/machine-learning-infographic-768x461.jpeg 768w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/machine-learning-infographic.jpeg 1000w\" sizes=\"auto, (max-width: 630px) 100vw, 630px\" \/><figcaption id=\"caption-attachment-225102\" class=\"wp-caption-text\"><em>A photograph of a stop sign contains visual patterns and features such as color, shape, and letters that help human beings identify it as a stop sign. Machines, however, convert images into data patterns and numbers, draw connections, and make predictions from data to make decisions. The model might assign a 10 percent probability that the image shows a kite, 5 percent that it shows an apple, and 85 percent that it shows a stop sign, and the machine would conclude the object is a stop sign. 
(University graphic \/ Michael Osadciw<\/em>)<\/figcaption><\/figure>\n<h2><strong>Cognitive models and machine learning<\/strong><\/h2>\n<p>If a person sees an object she\u2019s never seen before, she will use her senses to determine various things about the object. She might look at the object, pick it up, and determine it resembles a hammer. She might then use it to pound things.<\/p>\n<p>\u201cSo much of human cognition is based on categorization and similarity to things we have already experienced through our senses,\u201d says Robby Jacobs, a professor of brain and cognitive sciences.<\/p>\n<p>While artificial intelligence researchers focus on building systems such as <strong><a href=\"#Baxter\">Baxter<\/a><\/strong> that interact with their surroundings and solve tasks with human-like intelligence, cognitive scientists use data science and machine learning to study how the human brain takes in data.<\/p>\n<p>\u201cWe each have a lifetime of sensory experiences, which is an amazing amount of data,\u201d Jacobs says. \u201cBut people are also very good at learning from one or two data items in a way that machines cannot.\u201d<\/p>\n<p>Imagine a child who is just learning the words for various objects. He may point at a table and mistakenly call it a chair, causing his parents to respond, \u201cNo that is not a chair,\u201d and point to a chair to identify it as such. As the toddler continues to point to objects, he becomes more aware of the features that place them in distinct categories. Drawing on a series of inferences, he learns to identify a wide variety of objects meant for sitting, each one distinct from others in various ways.<\/p>\n<p>This learning process is much more difficult for a computer. 
Machine learning requires exposing the computer to many sets of data so that it can steadily improve.<\/p>\n<p>One of Jacobs\u2019 projects involves printing novel plastic objects using a 3-D printer and asking people to describe the items visually and haptically (by touch). He uses this data to create computer models that mimic the ways humans categorize and conceptualize the world. Through these computer simulations and models of cognition, Jacobs studies learning, memory, and decision making, specifically how we take in information through our senses to identify or categorize objects.<\/p>\n<p>\u201cThis research will allow us to better develop therapies for the blind or deaf or others whose senses are impaired,\u201d Jacobs says.<\/p>\n<figure id=\"attachment_224892\" aria-describedby=\"caption-attachment-224892\" style=\"width: 630px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-224892\" src=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/Robby-Jacobs_fribbles-630x157.jpg\" alt=\"\" width=\"630\" height=\"157\" srcset=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/Robby-Jacobs_fribbles-630x157.jpg 630w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/Robby-Jacobs_fribbles-768x192.jpg 768w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/Robby-Jacobs_fribbles.jpg 774w\" sizes=\"auto, (max-width: 630px) 100vw, 630px\" \/><figcaption id=\"caption-attachment-224892\" class=\"wp-caption-text\"><em>Jacobs prints novel plastic objects\u2014called &#8220;Fribbles&#8221;\u2014on a 3-D printer and asks people to describe these through sight and through touch. He uses this data to create computer models. The top row of this figure shows computer-generated images of Fribbles, while the bottom row shows photographs of the physical objects. 
(University image \/ Robby Jacobs)<\/em><\/figcaption><\/figure>\n<h2><strong>Machine learning and speech assistants<\/strong><\/h2>\n<p>Many people cite glossophobia\u2014the fear of public speaking\u2014as their greatest fear.<\/p>\n<p>Ehsan Hoque and his colleagues at the University\u2019s Human-Computer Interaction Lab have developed computerized speech assistants to help combat this phobia and improve speaking skills.<\/p>\n<p>When we talk to someone, many of the things we communicate\u2014facial expressions, gestures, eye contact\u2014aren\u2019t registered by our conscious minds. A computer, however, is adept at analyzing this information.<\/p>\n<p>\u201cI want to learn about the social rules of human communication,\u201d says Hoque, an assistant professor of computer science and head of the Human-Computer Interaction Lab. \u201cThere is this dance going on when humans communicate: I ask a question; you nod your head and respond. We all do the dance but we don\u2019t always understand how it works.\u201d<\/p>\n<p>In order to better understand this dance, Hoque developed computerized assistants that can sense a speaker\u2019s body language and nuances in presentation and use those to help the speaker improve her communication skills. 
These systems include <strong><a href=\"https:\/\/www.rochester.edu\/newscenter\/conversing-computer-may-fight-fear-public-speaking-168122\/\">ROCSpeak<\/a><\/strong>, which analyzes word choice, volume, and body language; <strong><a href=\"https:\/\/www.rochester.edu\/newscenter\/wearable-technology-can-help-with-public-speaking-95552\/\">Rhema<\/a><\/strong>, a \u201csmart glasses\u201d interface that provides live, visual feedback on the speaker\u2019s volume and speaking rate; and, his newest system, <strong><a href=\"http:\/\/hoques.com\/Publications\/2016\/2016-IVA-Lissa-Razavi-et-al-2016.pdf\">LISSA<\/a><\/strong> (\u201cLive Interactive Social Skills Assistance\u201d), a virtual character resembling a college-age woman who can see, listen, and respond to users in a conversation. LISSA provides live and post-session feedback about the user\u2019s spoken and nonverbal behavior.<\/p>\n<figure id=\"attachment_224902\" aria-describedby=\"caption-attachment-224902\" style=\"width: 630px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-224902\" src=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/ROCSPEAK-1_photo-credit-Ehsan-Hoque-630x525.jpg\" alt=\"\" width=\"630\" height=\"525\" srcset=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/ROCSPEAK-1_photo-credit-Ehsan-Hoque-630x525.jpg 630w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/ROCSPEAK-1_photo-credit-Ehsan-Hoque-768x640.jpg 768w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/ROCSPEAK-1_photo-credit-Ehsan-Hoque-1024x853.jpg 1024w, https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/ROCSPEAK-1_photo-credit-Ehsan-Hoque.jpg 1152w\" sizes=\"auto, (max-width: 630px) 100vw, 630px\" \/><figcaption id=\"caption-attachment-224902\" class=\"wp-caption-text\"><em>Ehsan Hoque&#8217;s speech assistants analyze word choice, speaker volume, and 
body language. (University image \/ Ehsan Hoque)<\/em><\/figcaption><\/figure>\n<p>Hoque\u2019s systems differ from <strong><a href=\"#Luo\">Luo\u2019s social media algorithms<\/a><\/strong> and <strong><a href=\"#Howard\">Howard\u2019s natural language robot models<\/a><\/strong> in that people may use them in their own homes. Users then have the option of sharing the data they receive from the systems for research purposes. This method allows the algorithm to improve continuously\u2014the essence of machine learning.<\/p>\n<p>\u201cNew data constantly helps the algorithm improve,\u201d Hoque says. \u201cThis is of value for both parties because people benefit from the technology and while they\u2019re using it, they\u2019re helping the system get better by providing feedback.\u201d<\/p>\n<p>These systems have a wide range of applications, including helping people improve their small talk, helping individuals with Asperger syndrome overcome social difficulties, helping doctors interact with patients more effectively, improving customer service training\u2014and aiding in public speaking.<\/p>\n<h2><strong>Can robots eventually mimic humans?<\/strong><\/h2>\n<p>This is a question that has long lurked in the public imagination. The 2014 movie <em>Ex Machina<\/em>, for example, portrays a programmer who is invited to administer the <strong><a href=\"#Turing\">Turing Test<\/a><\/strong> to a human-like robot named Ava. 
Similarly, the HBO television series <em>Westworld<\/em> depicts a Western-themed futuristic theme park populated with artificially intelligent beings that behave and emote like humans.<\/p>\n<p>Although Hoque is able to model human cognition and improve the ways in which machines and humans interact, building machines that think like human beings, or that understand and display their emotional complexity, is not a goal he aims to achieve.<\/p>\n<p>\u201cI want the computer to be my companion, to help make my job easier and give me feedback,\u201d he says. \u201cBut it should know its place.\u201d<\/p>\n<p>\u201cIf you have the option, get feedback from a real human. If that is not available, computers are there to help and give you feedback on certain aspects that humans will never be able to get at.\u201d<\/p>\n<p>Hoque cites smile intensity as an example. Through machine learning techniques, computers are able to determine the intensity of various facial expressions, whereas humans are adept at answering the question, \u2018How did that smile make me feel?\u2019<\/p>\n<p>\u201cI don\u2019t think we want computers to be there,\u201d Hoque says.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Machine learning provides computers with the ability to learn from labeled examples and observations of data. 
Researchers at Rochester are developing computer programs incorporating machine learning to teach robots and software to understand natural language and body language,  make predictions from social media, and model human cognition.<\/p>\n","protected":false},"author":912,"featured_media":225882,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[116],"tags":[41372,11716,18672,18802,19382,24962,29502,24202,18572,19232],"class_list":["post-224862","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-sci-tech","tag-big-data-2017","tag-data-science","tag-department-of-brain-and-cognitive-sciences","tag-department-of-computer-science","tag-department-of-electrical-and-computer-engineering","tag-ehsan-hoque","tag-featured-post-side","tag-jiebo-luo","tag-research-finding","tag-social-media"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.1.1 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Machine learning advances human-computer interaction<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Machine learning advances human-computer interaction\" \/>\n<meta property=\"og:description\" content=\"Machine learning provides computers with the ability to learn from labeled examples and observations of data. 
Researchers at Rochester are developing computer programs incorporating machine learning to teach robots and software to understand natural language and body language, make predictions from social media, and model human cognition.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/\" \/>\n<meta property=\"og:site_name\" content=\"News Center\" \/>\n<meta property=\"article:published_time\" content=\"2017-03-10T19:56:07+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-10-29T19:14:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robotHoward.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"600\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Lindsey Valich\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Lindsey Valich\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/\"},\"author\":{\"name\":\"Lindsey Valich\",\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/#\/schema\/person\/fcd7d29a5b8e855924bf73b764dcd827\"},\"headline\":\"Machine learning advances human-computer interaction\",\"datePublished\":\"2017-03-10T19:56:07+00:00\",\"dateModified\":\"2024-10-29T19:14:35+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/\"},\"wordCount\":2142,\"image\":{\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robotHoward.jpg\",\"keywords\":[\"big-data-2017\",\"data science\",\"Department of Brain and Cognitive Sciences\",\"Department of Computer Science\",\"Department of Electrical and Computer Engineering\",\"Ehsan Hoque\",\"featured-post-side\",\"Jiebo Luo\",\"research finding\",\"social media\"],\"articleSection\":[\"Science &amp; Technology\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/\",\"url\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/\",\"name\":\"Machine learning advances human-computer 
interaction\",\"isPartOf\":{\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robotHoward.jpg\",\"datePublished\":\"2017-03-10T19:56:07+00:00\",\"dateModified\":\"2024-10-29T19:14:35+00:00\",\"author\":{\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/#\/schema\/person\/fcd7d29a5b8e855924bf73b764dcd827\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#primaryimage\",\"url\":\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robotHoward.jpg\",\"contentUrl\":\"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robotHoward.jpg\",\"width\":1000,\"height\":600,\"caption\":\"man with robot holding a coffee cup in front of him\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.rochester.edu\/newscenter\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Machine learning advances human-computer 
interaction\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/#website\",\"url\":\"https:\/\/www.rochester.edu\/newscenter\/\",\"name\":\"News Center\",\"description\":\"University of Rochester\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.rochester.edu\/newscenter\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.rochester.edu\/newscenter\/#\/schema\/person\/fcd7d29a5b8e855924bf73b764dcd827\",\"name\":\"Lindsey Valich\",\"url\":\"https:\/\/www.rochester.edu\/newscenter\/author\/lvalich\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Machine learning advances human-computer interaction","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/","og_locale":"en_US","og_type":"article","og_title":"Machine learning advances human-computer interaction","og_description":"Machine learning provides computers with the ability to learn from labeled examples and observations of data. 
Researchers at Rochester are developing computer programs incorporating machine learning to teach robots and software to understand natural language and body language, make predictions from social media, and model human cognition.","og_url":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/","og_site_name":"News Center","article_published_time":"2017-03-10T19:56:07+00:00","article_modified_time":"2024-10-29T19:14:35+00:00","og_image":[{"url":"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robotHoward.jpg","width":1000,"height":600,"type":"image\/jpeg"}],"author":"Lindsey Valich","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Lindsey Valich","Est. reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#article","isPartOf":{"@id":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/"},"author":{"name":"Lindsey Valich","@id":"https:\/\/www.rochester.edu\/newscenter\/#\/schema\/person\/fcd7d29a5b8e855924bf73b764dcd827"},"headline":"Machine learning advances human-computer interaction","datePublished":"2017-03-10T19:56:07+00:00","dateModified":"2024-10-29T19:14:35+00:00","mainEntityOfPage":{"@id":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/"},"wordCount":2142,"image":{"@id":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#primaryimage"},"thumbnailUrl":"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robotHoward.jpg","keywords":["big-data-2017","data science","Department of Brain and Cognitive Sciences","Department of Computer Science","Department of Electrical and Computer Engineering","Ehsan Hoque","featured-post-side","Jiebo Luo","research finding","social 
media"],"articleSection":["Science &amp; Technology"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/","url":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/","name":"Machine learning advances human-computer interaction","isPartOf":{"@id":"https:\/\/www.rochester.edu\/newscenter\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#primaryimage"},"image":{"@id":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#primaryimage"},"thumbnailUrl":"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robotHoward.jpg","datePublished":"2017-03-10T19:56:07+00:00","dateModified":"2024-10-29T19:14:35+00:00","author":{"@id":"https:\/\/www.rochester.edu\/newscenter\/#\/schema\/person\/fcd7d29a5b8e855924bf73b764dcd827"},"breadcrumb":{"@id":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#primaryimage","url":"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robotHoward.jpg","contentUrl":"https:\/\/www.rochester.edu\/newscenter\/wp-content\/uploads\/2017\/03\/robotHoward.jpg","width":1000,"height":600,"caption":"man with robot holding a coffee cup in front of 
him"},{"@type":"BreadcrumbList","@id":"https:\/\/www.rochester.edu\/newscenter\/machine-learning-advances-human-computer-interaction\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.rochester.edu\/newscenter\/"},{"@type":"ListItem","position":2,"name":"Machine learning advances human-computer interaction"}]},{"@type":"WebSite","@id":"https:\/\/www.rochester.edu\/newscenter\/#website","url":"https:\/\/www.rochester.edu\/newscenter\/","name":"News Center","description":"University of Rochester","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.rochester.edu\/newscenter\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.rochester.edu\/newscenter\/#\/schema\/person\/fcd7d29a5b8e855924bf73b764dcd827","name":"Lindsey Valich","url":"https:\/\/www.rochester.edu\/newscenter\/author\/lvalich\/"}]}},"_links":{"self":[{"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/posts\/224862","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/users\/912"}],"replies":[{"embeddable":true,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/comments?post=224862"}],"version-history":[{"count":41,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/posts\/224862\/revisions"}],"predecessor-version":[{"id":626002,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/posts\/224862\/revisions\/626002"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/media\/225882"}],"wp:attachment":[{"
href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/media?parent=224862"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/categories?post=224862"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.rochester.edu\/newscenter\/wp-json\/wp\/v2\/tags?post=224862"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}