PhD Defense: Dynamical Memory in Deep Neural Networks - A Pathway Towards Embodied Connectionist Systems
Matthew Evanusa
IRB-4105
Friday, June 28, 2024, 1:00-3:00 pm
Abstract

https://umd.zoom.us/j/4081568746?omn=92769871162

"Deep learning" has been a transformative force over the last decade, with AI becoming a household name and the companies powering it topping the stock market. However, its success rests on the same foundation the Perceptron rested on almost 70 years ago: I.I.D., time-less data, and a function mapper (now the deep neural network rather than the Perceptron) relating that data to target outputs. How well does this specialization match our intuitive views of what "artificial intelligence" means? Moving along a separate track inspired by the early days of AI and cybernetics, I ask: what is the next step toward building agents that think and interact in the world? From the connectionist viewpoint, what network topologies are we looking for? What does it mean for neural networks to generate persistent, coherent embodied experiences, in addition to "intelligence"?

A key element separating artificial neural networks from biological ones, I will argue, is a temporally coherent memory that carries through time along the same temporal "strands". This memory in time, which I refer to as sequence memory, is a constant, persistent experience that endows us and other intelligent life with a stable state, from which we gain not only our sense of self but our sense of consciousness as well. This persistent state is absent from modern deep learning, which focuses entirely on function mapping. To this end, inspired by multiple convergent channels - control theory, Frank Rosenblatt's early schematics, reservoir computing, neuroscience, and our lab's focus on embodied perception - I propose a new hybrid approach for deep learning: the Maelstrom Network, a modular neural network that couples an "unhooked" memory unit with deep learning controllers and motor output. I propose this as one potential first step toward giving artificial agents a sense of embodiment and a persistent state in memory, with the goal of moving toward agents that truly "experience" persistent time. This approach has the advantage of not discarding the deep learning advances made thus far - deep learning networks comprise some of the modules - while augmenting them with new temporal components. In this defense, I will present the prior work of mine that led to this point, survey 70 years of work in the field on memory in neural networks, recount my path through multiple domains to get here, and lay out my vision for what future iterations could mean for artificial intelligence: how this work can merge with ideas about an executive system, and potential future capabilities ranging from lifelong learning agents to endowing truly phenomenologically "alive" artificial intelligence.

Bio

Matthew Evanusa is a PhD candidate in Computer Science studying under Yiannis Aloimonos in the PRG lab. His interests center on reverse-engineering the key principles of embodied biological neural networks, memory and its intersection with deep learning, and how these concepts can inform the next generation of embodied, temporal, continual neural networks that operate beyond mere pattern recognition. He is a fellow of the NSF COMBINE program, which studies biological networks across scales, and a researcher in the Signals division of the US Naval Research Laboratory, where he will continue to develop the ideas presented here after graduation. In his other life, Matthew is an avid musician: he conducted the UMD Gamer Symphony Orchestra from 2018 to 2020, and he performs on cello and piano, sings in the choir, and arranges pieces.

This talk is organized by Migo Gui