PhD Proposal: Closing the User Expectations Gap: Interpretability and Cultural Extensions to NLP Models
Fenfei Guo
Remote
Friday, September 20, 2024, 9:00-11:00 am
Abstract

https://umd.zoom.us/j/96571234197

Data-driven approaches, particularly large pre-trained language models (LLMs), have revolutionized natural language processing (NLP) by capturing semantics and memorizing complex structures from extensive unstructured text data, resulting in substantial improvements across NLP applications. However, data-driven models do not always reflect user needs and expectations, owing to inherent limitations such as bias in training data, lack of interpretability and transparency, and over-reliance on common patterns.
This proposal aims to close the gap between data-driven NLP models and user expectations, with a focus on improving interpretability and extending the models' ability to align with culture.

Bio

Fenfei Guo is a PhD student working with Prof. Jordan Boyd-Graber to close the gap between human expectations and data-driven NLP models. Her focus is on improving the models' interpretability and extending their applicability to cultural contexts.

This talk is organized by Migo Gui