PhD Defense: Extracting Symbolic Representations Learned by Neural Networks
Thuan Huynh - University of Maryland, College Park
Tuesday, May 1, 2012, 1:00-2:00 pm
Abstract

THE DISSERTATION DEFENSE FOR THE DEGREE OF Ph.D. IN COMPUTER SCIENCE FOR

                                         Thuan Huynh

Understanding what neural networks learn from training data is of great interest in data mining, data analysis, critical applications, and the evaluation of neural network models. Unfortunately, the product of neural network training typically consists of opaque matrices of floating point numbers that are not readily understandable. This difficulty has inspired substantial past research on how to extract symbolic, human-readable representations from a trained neural network, but the results obtained so far are limited (for example, the extracted rule sets are often very large). This problem arises in part from the distributed hidden layer representation created during learning. Most past symbolic knowledge extraction algorithms have focused on progressively more sophisticated ways to cluster this distributed representation. In contrast, in this dissertation, I take a different approach: I develop ways to alter the error backpropagation training process itself so that it creates a representation of what has been learned in the hidden layer activation space that is more amenable to existing symbolic representation extraction methods.
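
The abstract does not spell out the specific modification to backpropagation, so the following is only a rough, hypothetical sketch of the general idea: adding a simple activation-sharpening penalty to the hidden layer so that activations cluster near 0 or 1, which makes them easier targets for rule extraction. All names here (train_step, W1, W2, lr, penalty) are illustrative, not the dissertation's actual formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t, W1, W2, lr=0.1, penalty=0.01):
    """One backprop step on a single example, with an assumed extra penalty
    term penalty * sum_j h_j*(1 - h_j) that pushes hidden activations toward
    0 or 1 so they form tighter clusters for later symbolic extraction."""
    # Forward pass
    h = sigmoid(W1 @ x)          # hidden activations
    y = sigmoid(W2 @ h)          # outputs

    # Standard output-layer error signal (squared-error loss)
    delta_out = (y - t) * y * (1.0 - y)

    # Hidden-layer error signal: the usual backpropagated term plus the
    # penalty's derivative (1 - 2h).  Both pieces depend only on quantities
    # available at the hidden unit, so the update remains a local computation.
    delta_hidden = ((W2.T @ delta_out) + penalty * (1.0 - 2.0 * h)) * h * (1.0 - h)

    # Gradient descent weight updates
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hidden, x)
    return W1, W2
```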

In this context, this dissertation research makes four main contributions. First, modifications to the backpropagation learning procedure are derived mathematically, and it is shown that these modifications can be accomplished as local computations. Second, the effectiveness of the modified learning procedure for feedforward networks is established by showing that, on a set of benchmark tasks, it produces rule sets that are substantially simpler than those produced by standard backpropagation learning. Third, this approach is extended to simple recurrent networks, and experimental evaluation shows a remarkable reduction in the sizes of the finite state machines extracted from recurrent networks trained in this way. Finally, the method is further modified to work with echo state networks, and computational experiments again show significant improvement in finite state machine extraction from these networks. These results establish that principled modification of error backpropagation, so that it constructs a better separated hidden layer representation, is an effective way to improve contemporary symbolic extraction methods.
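
Finite state machines are commonly extracted from trained recurrent networks by clustering the hidden states the network visits and reading off transitions between clusters. The sketch below shows that generic clustering-based procedure only; it is not necessarily the extraction method used in the dissertation, and rnn_step, init_state, and sequences are hypothetical placeholders for a trained network's step function, initial state, and input strings.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_fsm(rnn_step, init_state, sequences, n_states=8):
    """Cluster the hidden states visited by a trained recurrent network and
    read off a finite state machine over the clusters.  rnn_step(state, sym)
    is assumed to return the next hidden-state vector."""
    # 1. Run the network over the input strings, recording every hidden
    #    state and every (previous state, symbol, next state) transition.
    visited, transitions = [], []
    for seq in sequences:
        state = init_state
        for sym in seq:
            nxt = rnn_step(state, sym)
            visited.append(nxt)
            transitions.append((state, sym, nxt))
            state = nxt

    # 2. Cluster the visited hidden states; each cluster becomes one FSM state.
    km = KMeans(n_clusters=n_states, n_init=10).fit(np.array(visited))
    label = lambda v: int(km.predict(np.array(v).reshape(1, -1))[0])

    # 3. Build the transition table between clusters.
    fsm = {}
    for prev, sym, nxt in transitions:
        fsm[(label(prev), sym)] = label(nxt)
    return fsm
```

A better separated hidden-state space makes this clustering step far less ambiguous, which is why the modified training procedure yields much smaller extracted machines.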

Examining Committee:

Committee Chair:        Dr. James A. Reggia
Dean’s Representative:  Dr. Juan Uriagereka
Committee Members:      Dr. Lise Getoor
                        Dr. David W. Jacobs
                        Dr. Dana Nau
EVERYONE IS INVITED TO ATTEND THE PRESENTATION PORTION OF THIS DEFENSE
This talk is organized by Jeff Foster