Artificial intelligence (AI) systems are increasingly opaque, even to their creators. For example, when language systems, in particular large language models, generate claims that people can easily verify to be untrue, trust is irrelevant; what matters is when they generate claims that cannot (easily) be verified to be true. I will discuss how people use and trust AI systems in such contexts, and how those systems can be improved to increase their trustworthiness. I'll also discuss various initiatives on the UMD campus related to trustworthy AI, such as the Institute for Trustworthy AI in Law & Society (TRAILS) and the new Artificial Intelligence Interdisciplinary Institute at Maryland (AIM).
Hal Daumé III is a professor of computer science with appointments in the Maryland Language Science Center and the University of Maryland Institute for Advanced Computer Studies, where he is also the director of TRAILS and AIM. In addition to fairness and natural language processing, his research focuses on understanding the computational properties of language and learning, as well as trustworthy AI. Daumé has received numerous accolades, including best paper awards at the 2022 American Association for Corpus Linguistics conference; the 2018 Annual Meeting of the Association for Computational Linguistics (ACL); the 2016 Annual Conference of the North American Chapter of the Association for Computational Linguistics; the 2011 Annual Collaboration, Electronic Messaging, Anti-Abuse and Spam Conference; and the 2009 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases. He has also received a Test of Time award at ACL 2022 and a best demo award at the 2015 Conference on Neural Information Processing Systems.