PhD Defense: Advancing Object Understanding in Large Language Models
Mollie Shichman
Friday, October 17, 2025, 9:30-11:30 am
Abstract

As Large Language Models (LLMs) pervade our world, developing their understanding of objects is imperative, especially when deploying LLMs in high-stakes tasks with data and computational constraints, such as disaster relief missions. We propose methods for measuring and improving the object reasoning capabilities of smaller, offline LLMs. We develop an Affordance Ontology for describing various objects and their functionalities, along with evaluation tasks measuring how well Masked Language Models can predict an object given its use case. We then introduce a pipeline for synthetically generating data that fine-tunes smaller LLMs to excel at reasoning about objects in disasters. We find that models trained with our pipeline excel at general object reasoning, but still struggle to reason about the highly technical, multi-step objects needed for disaster relief. This leaves open how our synthetic data compares with human-authored data, so we conduct a series of experiments dedicated to understanding how best to imbue LLMs with knowledge from different data sources. Using reasoning about fantasy role-playing games as a proxy task, we test the benefits of Retrieval Augmented Generation (RAG) systems against standard fine-tuning, drawing on both fantasy role-playing dialogues and our synthetic data. We find that our synthetic data is more effective for both fine-tuning and RAG, while real-world data can boost an already strong RAG system. We also find that the best-performing models improve at complex object reasoning, but object reasoning beyond standard affordances remains an open challenge.
Bio

Mollie Shichman is a sixth-year Ph.D. student advised by Dr. Rachel Rudinger. Since 2023, her research has been funded by the Army Research Laboratory under the supervision of Dr. Claire Bonial. Her research interests lie in exploring how language models understand and reason about physical common sense and object usage.

This talk is organized by Migo Gui.