In recent decades, architectural advances have brought parallelism to the mainstream. However, due to a variety of performance and correctness issues in practice, developing parallel software remains difficult—even for experts. This difficulty is exacerbated by the fact that mainstream languages are designed for sequential execution by default, and do not provide strong guarantees on safety and performance for parallel programs.
To address the difficulty of parallel programming, my research puts parallelism first: we assume parallel execution by default and rethink fundamental abstractions from the ground up to provide guarantees on both safety and performance. In this talk, I highlight two contributions in particular: (1) disentanglement, which enables provably efficient parallel garbage collection, and (2) automatic parallelism management, which provides a solution to the long-standing granularity control problem. All of this work is implemented in MaPLe, an open-source compiler and runtime system that we built for provably efficient and safe parallel programming. MaPLe is currently being used at Carnegie Mellon University to help teach parallel programming to over 500 students every year, and our empirical results show that MaPLe can compete with the performance of low-level, hand-optimized code written in languages such as C/C++. To conclude, I discuss my future research plans for making it simpler and safer to develop high-performance parallel software.
Sam Westrick is a postdoc at Carnegie Mellon University, working with Umut Acar on provably efficient and practical implementation techniques for parallel programming languages. He received his PhD from Carnegie Mellon in 2022, and he is the lead developer of the MaPLe compiler for Parallel ML. His work has been recognized with multiple distinguished paper awards, and in 2023 he received the ACM SIGPLAN Dissertation Award for his dissertation, Efficient and Scalable Parallel Functional Programming Through Disentanglement.