From Sparse Patterns to Smart Acceleration: Machine Learning Methods for the Future of Computing
Bahar Asgari
IRB 0318 (Gannon) or https://umd.zoom.us/j/93754397716?pwd=GuzthRJybpRS8HOidKRoXWcFV7sC4c.1
Friday, December 12, 2025, 11:00 am-12:00 pm
Abstract

Sparse matrix–matrix multiplication underpins domains such as scientific computing, graph analytics, and machine learning, yet remains a double-edged sword: it creates opportunities for acceleration while posing significant challenges for accelerator design. Moreover, as modern sparse workloads grow increasingly heterogeneous, static sparse accelerator designs struggle to sustain high performance across their diverse characteristics. Reconfigurable computing offers a promising path by letting hardware adapt its dataflows and resource allocation at runtime, but doing so effectively requires principled methods beyond simple heuristics. This talk introduces two complementary approaches that advance both the efficiency and adaptability of sparse accelerators. The first, Boötes, is a spectral-clustering–based technique that reorders sparse matrices to reduce off-chip memory traffic during row-wise multiplication, aligning data access patterns with operand reuse to deliver performance gains across multiple state-of-the-art accelerator architectures while significantly lowering preprocessing costs. The second, Misam, is a machine-learning–assisted framework that dynamically selects optimal dataflows for sparse matrix multiplication, overcoming the rigidity of fixed designs and achieving substantial speedups with minimal FPGA reconfiguration overhead. Together, these approaches illustrate how combining adaptive machine-learning strategies with algorithmic data reordering paves the way for the next generation of versatile and efficient sparse accelerators.
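
To make the reordering idea concrete, here is a minimal sketch of spectral row reordering in plain NumPy: rows that share nonzero columns are linked in a similarity graph, and sorting rows by the Fiedler vector (the eigenvector of the graph Laplacian's second-smallest eigenvalue) places rows with similar sparsity patterns adjacently, which improves operand reuse. This is an illustrative toy, not Boötes' actual algorithm or cost model.

```python
import numpy as np

def spectral_row_order(A):
    """Order rows of sparsity pattern A by the Fiedler vector of the
    row-similarity graph (rows are linked when they share nonzero columns)."""
    S = (A @ A.T).astype(float)      # shared-column counts between row pairs
    np.fill_diagonal(S, 0.0)         # no self-edges
    L = np.diag(S.sum(axis=1)) - S   # graph Laplacian L = D - S
    _, V = np.linalg.eigh(L)         # eigenvalues in ascending order
    fiedler = V[:, 1]                # eigenvector of 2nd-smallest eigenvalue
    return np.argsort(fiedler)       # similar rows end up adjacent

# Toy pattern whose row overlaps form a chain: row2 - row0 - row3 - row1.
A = np.array([[0, 0, 1, 1, 0],   # overlaps rows 2 and 3
              [1, 1, 0, 0, 0],   # overlaps row 3 only
              [0, 0, 0, 1, 1],   # overlaps row 0 only
              [0, 1, 1, 0, 0]])  # overlaps rows 0 and 1
print(spectral_row_order(A))     # recovers the chain order, up to reversal
```

Sorting by the Fiedler vector is a classic relaxation of the graph-partitioning objective, so rows that touch the same columns land near each other in the new order.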

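The dataflow-selection idea can likewise be sketched with a toy feature-based selector: compute cheap structural features of the input matrix, then map them to one of the standard sparse-multiplication dataflows. The features and thresholds below are illustrative placeholders, not Misam's learned model.

```python
import numpy as np

def matrix_features(A):
    """Cheap structural features a dataflow selector might use (illustrative)."""
    row_nnz = np.count_nonzero(A, axis=1)
    return {"density": np.count_nonzero(A) / A.size,
            "row_nnz_cv": row_nnz.std() / max(row_nnz.mean(), 1e-9)}

def select_dataflow(feats):
    """Toy stand-in for a learned selector; thresholds are made up."""
    if feats["density"] > 0.5:
        return "inner-product"         # dense-ish inputs reuse operands well
    if feats["row_nnz_cv"] > 1.0:
        return "row-wise (Gustavson)"  # skewed rows: adapt work per row
    return "outer-product"             # very sparse, uniform rows

A = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]])
print(select_dataflow(matrix_features(A)))
```

A learned model would replace the hand-written thresholds, but the interface is the same: features in, dataflow choice out, with reconfiguration triggered only when the choice changes.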
Bio

Bahar Asgari is an assistant professor in the Department of Computer Science at the University of Maryland, College Park (UMD), with a joint appointment at UMIACS, an affiliation with the Department of Electrical and Computer Engineering at UMD, and an affiliation with the Artificial Intelligence Interdisciplinary Institute at Maryland (AIM). Prior to joining UMD, she spent a year on Google's Systems and Services Infrastructure team, where she focused on improving the performance of Google's systems. She earned her Ph.D. in Electrical and Computer Engineering from Georgia Tech in 2021. Asgari is a recipient of the DoE Early Career Award in 2023 and received the Teaching Excellence Award in the Department of Computer Science at UMD in 2024. She was also selected as a Rising Star in EECS in 2019. Her research group at UMD, the Computer Architecture and Systems Lab (CASL), is dedicated to shaping the future of computing, with a primary goal of enabling intelligent, dynamically reconfigurable architectures.

This talk is organized by Samuel Malede Zewdu.