
Current CoE-funded Projects

The CoE proudly sponsors the research of faculty and partner companies. The Center’s funding allows researchers to develop innovative and visionary approaches to data science in a variety of areas and industries. We are currently funding the partnerships below and look forward to seeing the results of this research and the positive economic impact it will have on New York State and the world of data science.

Currently Funded Research Collaborations


Generative Models for Audio Processing

PI Researcher: Mark Bocko

This research will support Obscure Signals, a new Rochester startup developing AI-based tools for the preservation and restoration of historical audio recordings. Inferring the signal processing steps employed in historical recording methods is currently a time-consuming process of experimentation and expert listening assessment. Historical recording methods are also intrinsically lossy; for example, the original audio signal may be compressed. There is therefore a need for generative AI models in the restoration process. The research conducted in this project will explore the use of “neural optimal transport” for such generative AI tools.
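For readers unfamiliar with the technique, the sketch below shows one common neural optimal transport setup: a max-min game between a transport map and a Kantorovich potential ("critic"), trained on unpaired degraded and clean audio frames. This is a minimal PyTorch illustration, not the project's implementation; the frame length, network sizes, quadratic cost, and learning rates are all assumptions.

```python
# Minimal sketch of a neural optimal transport map for audio restoration.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

FRAME = 512  # hypothetical frame length (samples) for raw-audio segments

def mlp(dim_in, dim_out):
    return nn.Sequential(nn.Linear(dim_in, 512), nn.ReLU(),
                         nn.Linear(512, 512), nn.ReLU(),
                         nn.Linear(512, dim_out))

T = mlp(FRAME, FRAME)   # transport map: degraded frame -> restored frame
f = mlp(FRAME, 1)       # Kantorovich potential ("critic") on clean frames

opt_T = torch.optim.Adam(T.parameters(), lr=1e-4)
opt_f = torch.optim.Adam(f.parameters(), lr=1e-4)

def cost(x, y):
    # Quadratic transport cost between a degraded frame and its restoration.
    return 0.5 * ((x - y) ** 2).sum(dim=1)

def train_step(degraded, clean):
    # degraded, clean: (batch, FRAME) tensors sampled independently
    # (unpaired) from the degraded and clean audio distributions.
    restored = T(degraded)
    # Inner step: T minimizes transport cost minus the potential at its output.
    loss_T = (cost(degraded, restored) - f(restored).squeeze(1)).mean()
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()

    # Outer step: f maximizes the gap between mapped and real clean samples.
    loss_f = (f(T(degraded).detach()).squeeze(1) - f(clean).squeeze(1)).mean()
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

# Smoke test with random stand-in data.
train_step(torch.randn(8, FRAME), torch.randn(8, FRAME))
```

The appeal of this formulation for restoration is that it learns a mapping between whole signal distributions rather than requiring paired degraded/clean examples, which rarely exist for historical recordings.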

Development of a Low-Cost, Low-Power Integrated Machine Health Monitoring Sensor

PI Researcher: Michael Heilemann

This project, in conjunction with ADVIS Inc., is focused on developing a device that cost-effectively brings machine health monitoring to a broad spectrum of Department of Defense (DoD) assets (vehicles, pumps, etc.) where the implementation of conventional monitoring systems is cost prohibitive. To meet size (~1 in³) and power-consumption (battery life of ~3 years) requirements, the device utilizes low-power embedded machine learning (ML) models trained on data acquired by a vibration sensor. Spectral features extracted from a recorded signal are used to train an embedded ML model to perform tasks such as the detection of anomalies and faults in mechanical systems.
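As a rough illustration of this kind of pipeline, the sketch below computes log band-energy features from vibration frames and flags frames that stray from a baseline fitted on healthy operation. The frame length, band count, Gaussian baseline, and threshold are illustrative assumptions, not details of the ADVIS device.

```python
# Hedged sketch: band-energy spectral features from a vibration signal
# feeding a tiny anomaly detector, as the project description implies.
import numpy as np

N_BANDS = 16  # assumed number of coarse spectral bands kept as features

def spectral_features(frame):
    # Log band energies of one windowed vibration frame: a compact
    # feature vector small enough for a low-power embedded model.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    bands = np.array_split(spectrum, N_BANDS)
    return np.log1p(np.array([b.sum() for b in bands]))

# Stand-in "healthy operation" recordings; real training data would come
# from the vibration sensor on a known-good machine.
healthy_frames = [np.random.randn(1024) for _ in range(200)]
healthy = np.stack([spectral_features(f) for f in healthy_frames])
mu, sigma = healthy.mean(axis=0), healthy.std(axis=0) + 1e-8

def is_anomalous(frame, z_thresh=4.0):
    # Flag a frame whose band energies deviate far from the healthy baseline.
    z = np.abs((spectral_features(frame) - mu) / sigma)
    return bool(z.max() > z_thresh)
```

Reducing each frame to a handful of band energies before inference is one way such a device could keep both memory and compute within a multi-year battery budget.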

Leveraging Large Language Models for Complex Robot Manipulation

PI Researcher: Chenliang Xu

In robot manipulation, adapting to dynamic environments with flexible task specifications is challenging. Language-based vision manipulation systems offer a solution by linking language instructions to visual data and generating actions. However, current approaches often develop vision models and action policies separately, leading to poor integration. To address this, we propose ACTLLM, a method that unifies visual interpretation and policy learning using large language models (LLMs). By generating structured scene descriptions and incorporating an action consistency loss, ACTLLM is expected to enhance the fusion of visual and policy elements, facilitating the efficient execution of complex tasks within a multi-turn visual dialogue framework.
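To make the idea of an action consistency loss concrete, here is one plausible reading in PyTorch: the action decoded from the LLM's structured scene description is regularized toward the action decoded from visual features, so the two pathways stay aligned. ACTLLM's actual formulation is not specified in this summary; the function, tensors, and weighting below are assumptions.

```python
# Hedged sketch of an "action consistency" training objective.
import torch
import torch.nn.functional as F

def action_consistency_loss(act_text, act_vision, act_target, weight=0.1):
    # act_*: (batch, action_dim) continuous actions. act_text is decoded from
    # the LLM's structured scene description; act_vision from visual features.
    task = F.mse_loss(act_vision, act_target)                 # imitation term
    consistency = F.mse_loss(act_text, act_vision.detach())   # pathway agreement
    return task + weight * consistency  # "weight" is an assumed hyperparameter
```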