Better Learning and Inference with Dependency Networks
Monday, January 28, 2013, 1:00-2:00 pm
Abstract
Bayesian and Markov networks have been widely successful for
learning and reasoning in domains with uncertainty, but each has
limitations.  Dependency networks are an alternative graphical
representation with more flexibility than Bayesian networks and more
efficient learning methods than Markov networks.  The disadvantages of
dependency networks are that they may represent inconsistent
probability distributions and that few inference algorithms apply to
them.
In this talk, I will show how we can improve the utility of dependency
networks with new learning and inference algorithms.  First, I will
show how mean field inference can be a faster alternative to Gibbs
sampling, even in inconsistent dependency networks.  Second, I will
show how dependency networks can be used to learn better Markov
networks in less time, compared to several state-of-the-art methods.
Finally, I will introduce a new method for directly converting an
inconsistent dependency network into a consistent Markov network.
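To make the first contribution concrete: a dependency network specifies
one conditional distribution P(x_i | x_-i) per variable, with no
guarantee that these conditionals agree with any single joint
distribution.  Gibbs sampling applies directly (resample each variable
from its own conditional), and a mean-field-style alternative replaces
the sampled neighbors with their current expected values.  The Python
sketch below is illustrative only, not the speaker's implementation: it
assumes binary variables with logistic conditionals, and the weights W
and biases b are hypothetical.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def conditional(i, x, W, b):
        # P(x_i = 1 | x_-i) under the i-th local logistic model;
        # the self-term W[i, i] * x[i] is subtracted out.
        return sigmoid(b[i] + W[i] @ x - W[i, i] * x[i])

    def gibbs_marginals(W, b, n_sweeps=1000, rng=None):
        # Ordered Gibbs sampling: resample each variable in turn from
        # its conditional, then average the samples to estimate marginals.
        rng = rng if rng is not None else np.random.default_rng(0)
        n = len(b)
        x = rng.integers(0, 2, size=n).astype(float)
        samples = []
        for _ in range(n_sweeps):
            for i in range(n):
                x[i] = float(rng.random() < conditional(i, x, W, b))
            samples.append(x.copy())
        return np.mean(samples, axis=0)

    def mean_field_marginals(W, b, n_iters=100):
        # Mean-field-style fixed point: feed each conditional the current
        # means of the other variables instead of sampled values.
        n = len(b)
        mu = np.full(n, 0.5)
        for _ in range(n_iters):
            for i in range(n):
                mu[i] = conditional(i, mu, W, b)
        return mu

    # Example usage with random (asymmetric, hence possibly
    # inconsistent) hypothetical parameters:
    rng = np.random.default_rng(1)
    W = rng.normal(size=(3, 3))
    b = rng.normal(size=3)
    print(gibbs_marginals(W, b))
    print(mean_field_marginals(W, b))

Because every mean-field update is deterministic, a few sweeps of
fixed-point iteration can stand in for many Gibbs sweeps.  Note that
nothing forces W to be symmetric here; an asymmetric W is precisely the
inconsistent case, where ordered Gibbs sampling still runs but its
long-run averages can depend on the update order.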
 
Based on joint work with Jesse Davis and Arash Shamaei.
 
 
Bio
Daniel Lowd is an Assistant Professor in the Department of Computer
and Information Science at the University of Oregon.  His research
interests include learning and inference with probabilistic graphical
models, adversarial machine learning, and statistical relational
machine learning.  He maintains Libra, an open-source toolkit for
Learning and Inference in Bayesian networks, Random fields, and
Arithmetic circuits.
This talk is organized by Lise Getoor.