Prompt-based adaptation has become the dominant way large language models (LLMs) are deployed, through methods such as in-context learning, prompt engineering, and lightweight fine-tuning. This dissertation studies a central weakness of that interface: model behavior is often shaped not only by what a prompt says, but also by fragile structural choices such as where information is placed, how instructions are phrased, and what form of supervision is provided. I argue that robustness to these factors is a missing requirement for reliable LLM adaptation.
The first part of this work identifies a new source of instability in in-context learning: the position of demonstrations within a prompt. I show that moving the same demonstrations to different locations in a prompt can substantially change model predictions, revealing prompt structure as an important but underexamined axis of variation. The second part turns this fragility into an advantage through AutoDPP, a lightweight routing method that predicts the best-performing prompt layout for each query without additional LLM calls. This demonstrates that prompt sensitivity can be modeled and exploited rather than simply averaged away.
The final part asks whether prompt fragility can be addressed at deeper levels of the adaptation stack. I investigate activation steering as a possible alternative to prompt-level control, propose Sheaf Preference Optimization to enforce consistency across semantically equivalent prompt rephrasings, and develop adaptive preference escalation methods for cases where simple pairwise feedback becomes uninformative. Together, these contributions present a unified view of prompt-based adaptation as a robustness problem spanning prompting, routing, and preference optimization.
Kwesi Cobbina is a fifth-year Ph.D. student in Computer Science at the University of Maryland, College Park, advised by Prof. Tianyi Zhou. His research focuses on efficient post-training adaptation of large language models, with particular emphasis on prompting, test-time representation control, and preference optimization.