https://umd.zoom.us/j/
Data-driven approaches, particularly large pre-trained language models (LLMs), have revolutionized natural language processing (NLP) by capturing semantics and memorizing complex structures from extensive unstructured text data, resulting in substantial improvements across NLP applications. However, data-driven models do not always reflect user needs and expectations because of inherent limitations and gaps, such as bias in the training data, a lack of interpretability and transparency, and over-reliance on common patterns.
This proposal aims to close the gap between data-driven NLP models and user expectations, with a focus on improving interpretability and extending the models' ability to align with cultural contexts.
Fenfei Guo is a PhD student working with Prof. Jordan Boyd-Graber on closing the gap between human expectations and data-driven NLP models. Her research focuses on improving model interpretability and extending models' applicability to cultural contexts.