A theory of appropriateness with applications to generative artificial intelligence
Tuesday, November 7, 2023, 4:00-5:00 pm
Registration requested: The organizer of this talk requests that you register if you are planning to attend.


Abstract

Appropriateness is governed by implicit and explicit norms, as well as by conventions, relationships, and emotions. Recently, with the advent of artificial intelligence (AI) systems capable of conversing in natural language, a new research problem of practical importance has emerged: that of ensuring AI agents act and speak appropriately in all contexts where they may operate. Appropriateness for a search engine is not the same as appropriateness for a comedy-writing assistant. Norms, which we characterize as learned behavioral standards supported by collective patterns of sanctioning, are powerful forces in shaping behavior, both of individuals and of groups such as corporations and nation states. The content of our norms is at least partly arbitrary, since many norms have no material consequences, e.g., what color clothing it is appropriate to wear to a funeral. Many norms are deeply context-dependent and difficult to articulate precisely in language. Some norms remain entrenched for centuries while others shift rapidly, featuring strong positive-feedback cascade dynamics (e.g., smoking cigarettes in public). We explore here whether an approach based on learning context-dependent social norms from sanctioning, as people do, may provide a route to building models capable of adapting on the fly to new or changing contexts, and whether this approach may help produce AI capable of acting and speaking appropriately in more diverse and more niche settings.
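As a purely illustrative sketch (not the speaker's actual method), the core idea of learning context-dependent norms from sanctioning can be caricatured as an agent that maintains per-context appropriateness estimates and nudges them up or down whenever an action draws approval or a sanction. All names, contexts, and the learning rule below are hypothetical simplifications chosen for clarity.

```python
from collections import defaultdict


class NormLearner:
    """Toy agent that estimates the context-dependent appropriateness of
    actions from observed sanctioning. A hypothetical sketch only; the real
    research program involves far richer learning dynamics."""

    def __init__(self, lr=0.2):
        self.lr = lr
        # Prior of 0.5: appropriateness is unknown in a new context,
        # so the agent starts out maximally uncertain.
        self.appropriateness = defaultdict(lambda: 0.5)

    def observe_sanction(self, context, action, sanctioned):
        # Move the estimate toward 0 if the action was sanctioned,
        # toward 1 if it passed without sanction (tacit approval).
        key = (context, action)
        target = 0.0 if sanctioned else 1.0
        self.appropriateness[key] += self.lr * (target - self.appropriateness[key])

    def act(self, context, actions):
        # Choose the action currently believed most appropriate
        # in this particular context.
        return max(actions, key=lambda a: self.appropriateness[(context, a)])


learner = NormLearner()
for _ in range(10):
    learner.observe_sanction("funeral", "joke", sanctioned=True)
    learner.observe_sanction("funeral", "condolence", sanctioned=False)

# After repeated sanctioning, the agent avoids joking at funerals,
# while its estimates for other contexts are untouched.
print(learner.act("funeral", ["joke", "condolence"]))
```

Because the estimates are keyed by (context, action) pairs, the same action can be learned as appropriate in one setting and inappropriate in another, which is the context-dependence the abstract emphasizes.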

Bio

Joel Z. Leibo is a senior staff research scientist at Google DeepMind. His research is concerned with studying cooperation in both humans and machines, and with evaluating AI capabilities. He is interested in reverse engineering human biological and cultural evolution to inform the development of multi-agent artificial intelligence that is simultaneously human-like and human-compatible.


Note: Please register using the Google Form on our website https://go.umd.edu/marl for access to the Google Meet and talk resources.

This talk is organized by Saptarashmi Bandyopadhyay