Computer Science Seminar Series
March 12, 2024
“Foundations of Multisensory Artificial Intelligence”
Paul Liang, Carnegie Mellon University
Building multisensory AI systems that learn from multiple sensory inputs (such as text, speech, video, real-world sensors, wearable devices, and medical data) holds great promise for many scientific areas, with practical benefits such as supporting human health and well-being, enabling multimedia content processing, and enhancing real-world autonomous agents. In this talk, Paul Liang will discuss his research on the machine learning principles of multisensory intelligence, as well as practical methods for building multisensory foundation models over many modalities and tasks. In the first half of the seminar, Liang will present a theoretical framework formalizing how modalities interact with one another to give rise to new information for a task. These interactions are the basic building blocks of all multimodal problems, and quantifying them enables users to understand multimodal datasets and to design principled approaches for learning these interactions. In the second half of the seminar, Liang will present his work on cross-modal attention and the multimodal transformer architectures that now underpin many of today’s multimodal foundation models. Finally, he will discuss his collaborative efforts in scaling AI to many modalities and tasks for real-world impact on affective computing, mental health, and cancer prognosis.
Paul Liang is a PhD student in machine learning at Carnegie Mellon University, advised by Louis-Philippe Morency and Ruslan Salakhutdinov. He studies the machine learning foundations of multisensory intelligence in order to design practical AI systems that integrate, learn from, and interact with a diverse range of real-world sensory modalities. His work has been applied in affective computing, mental health, pathology, and robotics. He is a recipient of the Siebel Scholars Award, the Waibel Presidential Fellowship, a Meta Research PhD Fellowship, and the Center for Machine Learning and Health Fellowship, and he was named a Rising Star in data science. He has additionally received three Best Paper or Honorable Mention Awards at the International Conference on Multimodal Interaction and at Conference on Neural Information Processing Systems workshops. Outside of research, Liang received the Alan J. Perlis Graduate Student Teaching Award for instructing courses on multimodal machine learning and for advising students around the world in directed research.