Challenges in End-to-End Generation
Alexander Rush - Harvard University
Wednesday, April 25, 2018, 11:00 am-12:00 pm
Abstract
Progress in neural machine translation (NMT) has led to optimism about text generation tasks such as summarization and dialogue, but the successes and challenges in this space have been harder to quantify. In this talk, I will survey recent advances in neural NLG and present a successful implementation of these techniques for the 2017 E2E NLG challenge (Gehrmann et al., 2018). Despite success on these small-scale examples, similar models fail to scale to a more realistic data-to-document corpus. Analysis shows that systems will need further improvements in discourse modeling, reference, and referring expression generation (Wiseman et al., 2017). Finally, I will present recent work in unsupervised NLG that shows promising results in neural style transfer using a continuous GAN-based text autoencoder (Zhao et al., 2017).
Bio

Alexander "Sasha" Rush is an assistant professor at Harvard University. His research interests are in machine learning methods for NLP, with a recent focus on deep learning for text generation, including applications in machine translation, data and document summarization, and diagram-to-text generation, as well as the development of the OpenNMT translation system. His past work focused on structured prediction and combinatorial optimization for NLP. Sasha received his PhD from MIT, supervised by Michael Collins, and was a postdoc at Facebook NY under Yann LeCun. His work has received four research awards at major NLP conferences.

This talk is organized by Marine Carpuat