Hello from Adobe and Temporal effects on NLP models
Wednesday, October 20, 2021, 11:00 am-12:00 pm
Abstract

I will briefly introduce my work at Adobe Research, then talk about recent work with my student Oshin Agarwal.

How does the performance of models trained to perform language-related tasks change over time? This question is of great practical importance, yet there are few definitive findings. I will present a set of experiments with systems powered by large pretrained neural representations for English to demonstrate that temporal model deterioration is not as big a concern for non-specialized domains, with some models in fact improving when tested on data drawn from later time periods. However, temporal domain adaptation is still beneficial: better performance for a given time period is possible when the system is trained on temporally more recent data.
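
As a concrete illustration of the evaluation protocol described above, the snippet below trains a model on data from one time period, tests it on data drawn from later periods (temporal deterioration), and then retrains on a more recent period before testing again (temporal adaptation). This is a minimal sketch: the corpus, labels, and bag-of-words classifier are hypothetical placeholders, not the large pretrained systems from the talk.

```python
# Minimal sketch of a temporal evaluation protocol (illustrative only;
# the talk's experiments use large pretrained representations, not this
# toy bag-of-words model, and far larger per-period datasets).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

def train(texts, labels):
    """Fit a simple text classifier (stand-in for a real NLP system)."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

# Hypothetical corpus, split by year: {year: (texts, labels)}.
data = {
    2012: (["stocks rallied after the earnings report",
            "the senate passed the new budget bill"],
           ["business", "politics"]),
    2015: (["shares fell as the company missed forecasts",
            "lawmakers debated the immigration proposal"],
           ["business", "politics"]),
    2018: (["the market closed higher on trade news",
            "a new privacy law was enacted"],
           ["business", "politics"]),
}

# Temporal deterioration: train once on the oldest slice, then evaluate
# on each later slice to see how performance changes over time.
old_model = train(*data[2012])
for year in (2015, 2018):
    texts, labels = data[year]
    acc = accuracy_score(labels, old_model.predict(texts))
    print(f"trained on 2012, tested on {year}: accuracy {acc:.2f}")

# Temporal domain adaptation: retrain on the most recent available
# slice and evaluate on the same held-out period.
adapted_model = train(*data[2015])
texts, labels = data[2018]
acc = accuracy_score(labels, adapted_model.predict(texts))
print(f"trained on 2015, tested on 2018: accuracy {acc:.2f}")
```

With real data, each yearly slice would contain many examples and the metric would match the task at hand (e.g., F1 for named entity recognition).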

Zoom: https://umd.zoom.us/j/98806584197?pwd=SXBWOHE1cU9adFFKUmN2UVlwUEJXdz09

Bio

Ani Nenkova is a principal scientist at Adobe Research, where she leads the Adobe-Maryland Document Intelligence Lab. Her research spans a number of topics in language technology. She and her collaborators are recipients of the best student paper award at SIGDial in 2010, the best paper award at EMNLP-CoNLL in 2012, and the best student-led paper at the AMIA Summit in 2021. She serves as editor-in-chief of the Transactions of the Association for Computational Linguistics.

This talk is organized by Wei Ai.