Programming languages define the abstractions through which we build software, while their implementations determine how those abstractions behave in practice. Bridging this gap has become increasingly challenging: modern languages emphasize higher-order features, rich type systems, and strong safety guarantees, yet software must still run efficiently across diverse and evolving hardware platforms. Because language implementations form a foundational part of the software stack, subtle bugs or miscompilations can compromise the correctness of entire systems, creating strong incentives for implementations that are not only efficient but also amenable to formal reasoning and verification. These demands call for new techniques that support expressive abstractions while enabling precise, tractable analyses to guide optimization and ensure correctness.
In this talk, I present a general framework for improving how language implementations analyze and optimize programs with first-class functions. At a high level, the framework uses static typing information already present in the program to better understand how values and functions flow through it, allowing the compiler to make informed decisions about transforming program structure and value representations. This framework enables traditional compiler optimizations to be generalized to first-class functions, resulting in more uniform and complete optimization pipelines for higher-order programs. I also present an application of this framework that incorporates aliasing information, which allows us to eliminate unnecessary allocations for function objects. Finally, I demonstrate how this framework can be repurposed for random program generation, yielding programs that reliably exercise their internal computations and thus strengthen both property-based and differential testing of compilers. I conclude by outlining ongoing work on a machine-verified implementation of this framework in the CertiCoq compiler, along with directions for future research.
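To make the kind of optimization described above concrete, here is a minimal, hypothetical sketch (not the talk's actual framework, and with names invented for illustration) of what a compiler can do once analysis proves that only one function reaches a higher-order call site: the closure that would capture its environment can be eliminated, with the captured value passed as an ordinary argument instead.

```python
# Source-level view: a higher-order function whose argument is a closure.
# Each call to add_n_all allocates a function object capturing `n`.
def map_list(f, xs):
    return [f(x) for x in xs]

def add_n_all(n, xs):
    # closure over `n` allocated here on every call
    return map_list(lambda x: x + n, xs)

# Hypothetical result after analysis: if the only function flowing into
# `f` is the lambda above, the compiler can specialize map_list for it
# and pass `n` directly (lambda lifting), so no function object or
# closure environment is ever allocated.
def add_n_all_specialized(n, xs):
    return [x + n for x in xs]

# Both versions compute the same result; only the representation differs.
assert add_n_all(3, [1, 2]) == add_n_all_specialized(3, [1, 2]) == [4, 5]
```

The observable behavior is unchanged; what the transformation removes is the allocation and indirect call associated with the first-class function value.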

