We are witnessing a paradigm shift in AI, transitioning from deep learning models to the era of Large Language Models (LLMs). This shift marks a transformative advancement, enabling AI to be applied to diverse real-world safety-critical applications. Despite these impressive achievements, a fundamental question remains: are LLMs truly ready for safe and secure use?
In this talk, I will show how my research embeds a computer security mindset to answer this question. To understand and build secure and safe LLMs, the talk adopts two core system perspectives: (1) investigating the lifecycle of LLMs and (2) analyzing the information flow of LLMs within agentic systems. I will discuss how to develop principled red-teaming frameworks to systematically evaluate LLM safety, highlight why model-level approaches alone are insufficient for securing LLMs, introduce security vulnerabilities from the system perspective, and present principled defense solutions for secure LLMs.
Chaowei Xiao is an Assistant Professor at the University of Wisconsin–Madison. His research focuses on building secure and trustworthy AI systems. He has received several prestigious awards, including the Schmidt Science AI2050 Early Career Award, the Impact Award from Argonne National Laboratory, and various industry faculty awards. His work has been recognized with multiple paper awards and honors, including the USENIX Security Distinguished Paper Award (2024), selection as an ACM Gordon Bell Prize Finalist (2024), the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research (2023), the Best Paper Award at the International Conference on Embedded Wireless Systems and Networks (EWSN) (2021), and the MobiCom Best Paper Award (2014).
Dr. Xiao's research has been cited over 14,000 times according to Google Scholar and has been featured in multiple media outlets such as Nature, Wired, Fortune, and The New York Times. Additionally, one of his research outputs was exhibited at the London Science Museum.