Ailments of Alignment: Hurdles in Adapting Large Language Models to Human Demands
Wednesday, April 26, 2023, 11:00 am-12:00 pm
Abstract

We are in the midst of an extraordinary arms race to build larger, more powerful chatbots that are adapted ("aligned") to follow human demands by producing appropriate responses to human commands. It is exciting to see how far these chatbots can be pushed by scaling large language models (LLMs) and varying design parameters. However, because this rapid progress is driven by market competition, the underlying design choices that enable these models are often opaque, leaving them prone to many issues and inefficiencies. In this talk, I will go over issues in aligning LLMs to follow human commands and present partial results. I will explore issues related to the size and diversity of human feedback, the choice of algorithm, and other concerns about the future of this trajectory. My objective is to raise a broad set of questions that indirectly emphasize the importance of academic research in this new era of AI.

Bio

Daniel Khashabi is an assistant professor in computer science at Johns Hopkins University and a member of the Center for Language and Speech Processing (CLSP). He is enthusiastic about creating computational models that enable reliable and transparent communication with humans, with the aim of aiding and enhancing their cognitive abilities. Before joining Johns Hopkins, he was a postdoctoral fellow at the Allen Institute for AI (2019-2022) and obtained a Ph.D. from the University of Pennsylvania in 2019.

This talk is organized by Rachel Rudinger.