PhD Defense: Interpretable Deep Learning for Time Series
Aya Ismail
Thursday, July 7, 2022, 11:00 am-1:00 pm
Abstract
Time series data emerge in applications across many critical domains, including neuroscience, medicine, finance, economics, and meteorology. However, practitioners in such fields are hesitant to use Deep Neural Networks (DNNs), which can be difficult to interpret. For example, in clinical research, one might ask, "Why did you predict that this person is more likely to develop Alzheimer's disease?" As a result, research efforts to improve the interpretability of deep neural networks have increased significantly in recent years. Nevertheless, these efforts have mainly targeted vision and language tasks, and their application to time series data remains relatively unexplored. My work aims to identify and address the limitations of interpretability of neural networks for time series data.

In the first part of my work, I extensively compare the performance of various interpretability methods across diverse neural architectures commonly used for time series, using a new benchmark of synthetic time series data. I propose and report multiple metrics to empirically evaluate how well interpretability methods detect feature importance over time. I find that existing combinations of network architectures and saliency methods fail to reliably and accurately identify feature importance over time. For RNNs, saliency vanishes over time, biasing detection of salient features toward later time steps; RNNs are therefore incapable of reliably detecting important features at arbitrary time intervals. Non-recurrent architectures, in turn, fail because they conflate the time and feature domains.
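
To give a flavor of the kind of evaluation this involves (this is a minimal illustrative sketch, not the benchmark's actual code), the snippet below computes plain gradient saliency for a PyTorch time series classifier and scores it against a known ground-truth importance mask, as is possible with synthetic data. The model interface, input shapes, and the precision-at-k metric name are assumptions made for illustration.

```python
# Minimal sketch, not the benchmark's actual code: plain gradient saliency for a
# time series classifier plus a simple precision metric against a ground-truth
# importance mask, as is available when the data are synthetic.
import torch

def gradient_saliency(model, x, target):
    """|d output[:, target] / d x| for x of shape (batch, time, features)."""
    x = x.clone().requires_grad_(True)
    model(x)[:, target].sum().backward()
    return x.grad.abs()                              # (batch, time, features) saliency map

def precision_at_k(saliency, true_mask, k):
    """Fraction of the top-k salient (time, feature) cells that are truly important."""
    flat = saliency.flatten(1)                       # (batch, time * features)
    topk = flat.topk(k, dim=1).indices               # indices of the k most salient cells
    hits = true_mask.flatten(1).float().gather(1, topk)
    return hits.mean().item()
```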

The second part of my work focuses on improving time series interpretability by enhancing saliency methods, training procedures, and neural architectures. First, I improve the quality of time series saliency maps by disentangling time and feature importance through two-step temporal saliency rescaling (TSR). Then, I introduce a saliency-guided training procedure for neural networks that reduces noisy gradients used in predictions. Next, I propose an inherently interpretable framework, Interpretable Mixture of Experts (IME), that provides interpretability for structured data while preserving accuracy. Finally, I design a novel RNN cell structure, input-cell attention, that preserves a direct gradient path from the input to the output at every time step. As a result, explanations produced by the input-cell attention RNN can detect important features regardless of when they occur in time.
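
As a rough sketch of the two-step rescaling idea (not the exact TSR implementation), one can first score each time step by how much the saliency map changes when that step is masked, and then rescale per-feature saliency within each time step by that score. The zero baseline and the reuse of the gradient_saliency helper sketched above are illustrative assumptions.

```python
# Rough sketch of two-step temporal saliency rescaling; the zero baseline and
# the reuse of gradient_saliency from the sketch above are illustrative
# assumptions, not the exact TSR implementation.
import torch

def temporal_saliency_rescaling(model, x, target, baseline=0.0):
    """x: (1, time, features). Returns a (time, features) rescaled saliency map."""
    base_map = gradient_saliency(model, x, target)[0]        # unmodified saliency
    n_steps = base_map.shape[0]

    # Step 1: time relevance = how much the whole saliency map changes
    # when all features at one time step are replaced by the baseline.
    time_relevance = torch.zeros(n_steps)
    for t in range(n_steps):
        x_masked = x.clone()
        x_masked[:, t, :] = baseline
        masked_map = gradient_saliency(model, x_masked, target)[0]
        time_relevance[t] = (base_map - masked_map).abs().sum()

    # Step 2: rescale per-feature saliency within each time step by its time relevance.
    return base_map * time_relevance.unsqueeze(1)
```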

Examining Committee:
Chair: Dr. Soheil Feizi
Dean's Representative: Dr. Joseph JaJa
Members: Dr. Héctor Corrada Bravo, Dr. Thomas Aaron Goldstein, Dr. Sercan Arik (Google Cloud AI Research)
Bio

Aya Abdelsalam Ismail is a Ph.D. candidate in the Computer Science department working with Soheil Feizi and Héctor Corrada Bravo. Her research interests include interpretability of deep learning models, time series forecasting, and applying deep learning in neuroscience and health informatics.

This talk is organized by Tom Hurst