CS Colloquium | Learning Symbols for Trustworthy AI
Venue: AC-04-LR-002
Abstract: Recent advances in deep learning have led to novel AI-based solutions to challenging computational problems. Yet state-of-the-art models do not provide reliable explanations of how they make decisions, and can make occasional mistakes on even simple problems. The resulting lack of assurance and trust is an obstacle to their adoption in safety-critical applications. Neurosymbolic learning architectures aim to address this challenge by bridging the complementary worlds of deep learning and logical reasoning via explicit symbolic representations. In this talk, I will describe representative neurosymbolic systems and show how they enable more accurate, interpretable, and domain-aware solutions to problems in healthcare and robotics.
About the Speaker: Rajeev Alur is the Zisman Family Professor of Computer and Information Science and the Founding Director of the ASSET Center for Trustworthy AI at the University of Pennsylvania. He obtained his bachelor's degree from IIT Kanpur and his PhD from Stanford University. His research focuses on principles and tools for the design and analysis of safe and trustworthy systems. His notable awards include the inaugural CAV (Computer-Aided Verification) Award, the inaugural Alonzo Church Award for contributions to logic and computation, the IIT Kanpur Distinguished Alumnus Award, and the Knuth Prize for contributions to theoretical computer science.
We look forward to your active participation.
