https://umd.zoom.us/my/
Algorithms are increasingly used to create markets, discover and disseminate information, incentivize behaviors, and inform real-world decision-making in a variety of socially consequential domains. In such settings, algorithms have the potential to improve aggregate utility by leveraging previously acquired knowledge, reducing transaction costs, and facilitating the efficient allocation of resources, broadly construed. However, ensuring that the distribution over outcomes induced by algorithmic decision-making renders the broader system sustainable (i.e., preserving rationality of participation for a diverse set of stakeholders, and identifying and mitigating the costs associated with unevenly distributed harms) remains challenging.
One set of challenges arises during algorithm or model development: here, we must decide how to operationalize sociotechnical constructs of interest, induce prosocial behavior, balance uncertainty-reducing exploration against reward-maximizing exploitation, and incorporate domain-specific preferences and constraints. Common desiderata such as individual or subgroup fairness, cooperation, or risk mitigation often resist uncontested analytic expression, induce combinatorial relations, or are at odds with unconstrained optimization objectives; they must therefore be carefully incorporated or approximated so as to preserve utility and tractability. Another set of challenges arises during model evaluation: here, we must contend with small sample sizes and high variance when estimating performance for intersectional subgroups of interest, and determine whether observed performance on domain-specific reasoning tasks may be upwardly biased due to annotation artifacts or data contamination.
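To make the exploration/exploitation tension concrete, consider a minimal epsilon-greedy bandit sketch in Python. This is purely illustrative: the arm count, epsilon value, and Bernoulli reward model are assumptions, not details from the thesis.

    import random

    def true_reward(arm):
        # Hypothetical environment: Bernoulli arms with fixed success
        # probabilities that are unknown to the learner.
        means = [0.2, 0.5, 0.8]
        return 1.0 if random.random() < means[arm] else 0.0

    def epsilon_greedy(n_arms=3, eps=0.1, horizon=1000):
        counts = [0] * n_arms    # pulls per arm
        values = [0.0] * n_arms  # empirical mean reward per arm
        total = 0.0
        for _ in range(horizon):
            if random.random() < eps:
                arm = random.randrange(n_arms)  # explore: reduce uncertainty
            else:
                arm = max(range(n_arms), key=lambda a: values[a])  # exploit
            r = true_reward(arm)
            counts[arm] += 1
            values[arm] += (r - values[arm]) / counts[arm]  # incremental mean
            total += r
        return total, values

    total, values = epsilon_greedy()
    print(f"total reward: {total:.0f}; empirical means: {values}")

Setting eps too low under-explores and can lock in a suboptimal arm, while setting it too high sacrifices reward to exploration; this is precisely the tension that domain-specific preferences and constraints make harder to navigate.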
In this thesis, we propose algorithms and evaluation methods to address these challenges and show how our methods can be applied to improve algorithmic acceptability and decision-making in the face of uncertainty in public health and conversational recommendation systems. Our core contributions include: (1) novel resource allocation algorithms to incorporate prosocial constraints while preserving utility in the restless bandit setting; (2) model evaluation techniques to inform harms identification and mitigation efforts; and (3) prompt-based interventions and meta-policy learning strategies to improve expected utility by encouraging context-aware uncertainty reduction in large language model (LLM)-based recommendation systems.
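As a toy illustration of the structure underlying contribution (1), the sketch below performs one round of budget-constrained arm selection with a per-group selection floor standing in for a prosocial constraint. It ignores the state dynamics that make the restless setting hard, and the scoring rule, group labels, and quota are hypothetical; this is a simplified sketch, not the thesis's algorithm.

    import random

    def allocate(scores, groups, budget, min_per_group):
        # Pick `budget` arms by score while guaranteeing each group at least
        # `min_per_group` selections (a toy fairness floor). Assumes the
        # combined group floors fit within the budget.
        chosen = set()
        for g in set(groups):
            members = sorted((i for i in range(len(scores)) if groups[i] == g),
                             key=lambda i: scores[i], reverse=True)
            chosen.update(members[:min_per_group])
        # Fill any remaining budget greedily by score.
        for i in sorted((i for i in range(len(scores)) if i not in chosen),
                        key=lambda i: scores[i], reverse=True):
            if len(chosen) >= budget:
                break
            chosen.add(i)
        return sorted(chosen)

    random.seed(0)
    scores = [random.random() for _ in range(8)]  # e.g., index-style priorities
    groups = [0, 0, 0, 0, 1, 1, 1, 1]             # two subgroups of arms
    print(allocate(scores, groups, budget=3, min_per_group=1))

Note how the floor can force the planner to pass over a higher-scoring arm: this is the utility/constraint tension that the proposed allocation algorithms aim to manage.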
Christine Herlihy is a CS PhD candidate at the University of Maryland, College Park, where she is advised by John P. Dickerson. Her research interests include sequential decision-making under uncertainty, algorithmic fairness, knowledge representation and reasoning, and healthcare. During her PhD, she has interned at Amazon Robotics, Google Research, and Microsoft Research. Prior to UMD, she earned her MS from Georgia Tech and completed her undergraduate studies at Georgetown University.