Sparsity: Challenge or Opportunity?
Bahar Asgari
https://umd.zoom.us/j/94543765116?pwd=clY3MVV5Z1g4T2xpdnJMdjFiMFhYdz09
Thursday, February 25, 2021, 1:00-2:00 pm
Abstract

Sparse problems, computer programs whose data lack spatial locality in memory, are central to application domains such as recommendation systems, computer vision, robotics, graph analytics, and scientific computing. Today, computers and supercomputers containing millions of CPUs and GPUs are actively executing sparse problems. Yet although sparse problems dominate, we have long designed our machines primarily for dense problems. Because of this mismatch between the capabilities of the hardware and the nature of the problems, even modern high-performance CPUs and GPUs and state-of-the-art domain-specific architectures are poorly suited to sparse problems, achieving only a tiny fraction of their peak performance.
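As a concrete illustration (mine, not taken from the talk), a minimal sparse matrix-vector product in CSR (compressed sparse row) form shows where the irregular memory accesses come from: the gather `x[col_idx[j]]` jumps to data-dependent locations, defeating caches and prefetchers that assume spatial locality.

```python
# A small sparse matrix stored in CSR form: only the nonzeros are kept,
# along with their column indices and per-row offsets into those arrays.
#
#     A = [[4, 0, 0, 1],
#          [0, 0, 5, 0],
#          [0, 2, 0, 3]]
values  = [4.0, 1.0, 5.0, 2.0, 3.0]   # nonzero values, row by row
col_idx = [0, 3, 2, 1, 3]             # column of each nonzero
row_ptr = [0, 2, 3, 5]                # row i's nonzeros: positions row_ptr[i]..row_ptr[i+1]-1

def spmv_csr(row_ptr, col_idx, values, x):
    """Compute y = A @ x for a CSR matrix A.

    The read x[col_idx[j]] is an irregular, data-dependent gather --
    the access pattern the abstract identifies as hostile to
    dense-oriented hardware.
    """
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(row_ptr) - 1):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[j] * x[col_idx[j]]   # irregular gather on x
    return y

x = [1.0, 1.0, 1.0, 1.0]
print(spmv_csr(row_ptr, col_idx, values, x))  # [5.0, 5.0, 5.0]
```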


In this talk, I present my research on resolving four main challenges that prevent sparse problems from achieving high performance on today's computing platforms: computation underutilization, slow decompression, data dependencies, and irregular/inefficient memory accesses. I focus on the last two challenges and show how my research converts mathematical dependencies into gate-level dependencies at the software level and exploits dynamic partial reconfiguration at the hardware level to execute sparse scientific problems faster than conventional architectures do. I also explain how my research handles sparsity by using an intelligent reduction tree near memory to process data while gathering them from random memory locations, neither where the data reside nor where dense computations occur. Finally, I outline my plans for a novel approach to computing based on intelligent, dynamically reconfigurable computation platforms that anticipate the needs of future data and algorithms.
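At its simplest, the gather-and-reduce pattern mentioned above can be modeled in software as gathering values from arbitrary memory locations and summing them in a tree (pairwise, in log-depth levels). The sketch below is my own toy illustration with invented names, not the talk's hardware design, which fuses the two steps near memory rather than performing them sequentially.

```python
def tree_reduce(vals):
    """Pairwise (tree-shaped) sum of a nonempty list: log2(n) levels.

    A hardware reduction tree performs each level's additions in parallel;
    this loop models the same combining order sequentially.
    """
    while len(vals) > 1:
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]

def gather_then_tree_reduce(memory, indices):
    # Gather from random, data-dependent locations, then reduce in a tree.
    # A near-memory reduction tree would combine values as they arrive,
    # instead of hauling every operand to the processor first.
    return tree_reduce([memory[i] for i in indices])

memory = list(range(10))
print(gather_then_tree_reduce(memory, [9, 0, 3, 3]))  # 15
```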

Bio


Bahar Asgari is a Ph.D. candidate in the School of Electrical and Computer Engineering at Georgia Tech. Her doctoral dissertation, advised by Professor Sudhakar Yalamanchili and Professor Hyesoon Kim, focuses on improving the execution performance of sparse problems. Her proposed hardware accelerators and hardware/software co-optimization solutions address essential challenges of sparse problems across a wide range of application domains, from machine learning to high-performance scientific computing. Beyond her dissertation research, Bahar has collaborated with other research scientists and faculty at Georgia Tech, as she believes collaboration is key to innovation. Her research and collaborative work have appeared at top-tier computer architecture conferences, including HPCA, ASPLOS, DAC, DATE, IISWC, ICCD, and DSN, as well as in high-impact journals. She was selected to participate in Rising Stars 2019, an intensive academic career workshop for women in EECS. Her personal website is https://baharasg.github.io/.

This talk is organized by Richa Mathur.