Tracking progress in Style Transfer: From Human to Automatic Evaluation + Adapting NLP Models through User Annotation and Feedback
Eleftheria Briakou and Michelle Yuan - UMD
Wednesday, December 1, 2021, 11:00 am-12:00 pm

This week we will have two presentations from CLIP Lab members. Please see below for their abstracts.

Zoom: https://umd.zoom.us/j/98806584197?pwd=SXBWOHE1cU9adFFKUmN2UVlwUEJXdz09

Eleftheria Briakou on "Tracking progress in Style Transfer: From Human to Automatic Evaluation"

Abstract: While the field of style transfer (ST) has been growing rapidly, it has been hampered by a lack of standardized practices for both human and automatic evaluation. In this talk, we will first summarize human evaluation practices described in 97 style transfer papers with respect to three main evaluation aspects: style transfer, meaning preservation, and fluency. As we will see, protocols for human evaluation in ST are often underspecified and not standardized, which limits the reproducibility of research in this field and progress toward better human and automatic evaluation methods. Then, we will switch gears and discuss issues in the automatic evaluation of ST. Concretely, taking formality as a case study, we will revisit several metrics for the automatic evaluation of each of the three ST aspects and finally outline best practices that correlate well with human judgments and are robust across languages.

Michelle Yuan on "Adapting NLP Models through User Annotation and Feedback"

Abstract: NLP models are pre-trained on extensive amounts of data to learn general knowledge about language. However, these models need additional data and training to adapt them to particular tasks and domains. My research looks at how user feedback and labeling can help adapt models. In my talk, I will discuss two of my recent research projects. First, I will talk about active learning for coreference resolution, a task that involves identifying entity mentions in text that refer to each other. To reduce the number of annotations, I explore strategies for labeling spans that help adapt models to new domains. Second, I will discuss a new project involving information triage through question answering.

Eleftheria is a fourth-year Ph.D. student in the Department of Computer Science at the University of Maryland, College Park. She is a member of the CLIP lab advised by Marine Carpuat. Eleftheria's research interests span various NLP fields such as computational semantics, machine translation, style transfer, crowdsourcing, generation evaluation and metrics, among others. Eleftheria's Ph.D. work focuses on detecting differences in meaning across languages and explores how they question common assumptions related to using data when developing NLP technology.

Michelle Yuan is a fifth-year Ph.D. student advised by Jordan Boyd-Graber and a member of the CLIP lab. She has worked on problems such as topic modeling, cross-lingual classification, and active learning.

This talk is organized by Wei Ai.