Graph neural networks (GNNs) have emerged as a powerful class of neural networks that use the connectivity and structure of real-world graphs to learn intricate properties and relationships between nodes. Many real-world graphs are too large to fit in the memory of a single GPU, and training GNNs on them requires techniques such as mini-batch sampling to scale. However, sampling can reduce accuracy in some cases, and both sampling and data transfer from the CPU to the GPU can slow down training. Distributed full-graph training, on the other hand, suffers from high communication overhead and load imbalance due to the irregular structure of graphs. This thesis proposes Plexus, a three-dimensional (3D) parallel approach for full-graph training that tackles these issues and scales to billion-edge graphs. It also introduces optimizations such as a permutation scheme for load balancing and a performance model to predict the optimal 3D configuration. Plexus is evaluated on a wide variety of graph datasets, with scaling results on up to 2048 GPUs on Perlmutter, 33% of the machine, and up to 2048 GCDs on Frontier. Plexus achieves unprecedented speedups of 2.3-6x over existing methods and reduces the time to solution by 5.2-8.7x on Perlmutter and 7-54.2x on Frontier.
Aditya Ranjan is a graduating M.S. student in Computer Science at the University of Maryland, College Park, advised by Professor Abhinav Bhatele. He earned his B.S. in Computer Science from the same institution in Fall 2023, where he was a President’s Scholar and received an honorable mention for the CRA Outstanding Undergraduate Researcher Award. As a member of the Parallel Software and Systems Group, he conducts research on distributed deep learning, irregular applications, and performance analysis tools. He was also part of a team recognized as a finalist for the ACM Gordon Bell Prize.