We propose the first loss function for approximate Nash equilibria of normal-form games that is amenable to unbiased Monte Carlo estimation. This construction allows us to deploy standard non-convex stochastic optimization techniques for approximating Nash equilibria, resulting in novel algorithms with provable guarantees. We complement our theoretical analysis with experiments demonstrating that stochastic gradient descent can outperform previous state-of-the-art approaches.
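For a concrete picture of the "equilibrium finding as loss minimization" idea the abstract alludes to, here is a minimal sketch in JAX: plain gradient descent on a temperature-smoothed exploitability of a rock-paper-scissors game. Everything in it (the toy game, the smoothed loss, the temperature, the step size) is an illustrative assumption; in particular, it evaluates the loss from the full payoff matrix, so it does not reproduce the unbiased Monte Carlo estimation that is the talk's actual contribution.

```python
# Toy sketch only, NOT the talk's construction: gradient descent on a
# temperature-smoothed exploitability computed from the full payoff matrix.
# The talk's loss is instead built so its gradients admit unbiased Monte Carlo
# estimates from sampled play. Game, temperature, and step size are assumptions.
import jax
import jax.numpy as jnp

# Rock-paper-scissors: row player's payoff matrix; the column player receives -A.
A = jnp.array([[ 0., -1.,  1.],
               [ 1.,  0., -1.],
               [-1.,  1.,  0.]])
TAU = 0.1  # smoothing temperature for the soft best response

def smoothed_exploitability(logits):
    """Sum of both players' (smoothed) best-response gains at profile (x, y)."""
    x, y = jax.nn.softmax(logits[0]), jax.nn.softmax(logits[1])
    gain_row = TAU * jax.nn.logsumexp(A @ y / TAU) - x @ A @ y      # row player
    gain_col = TAU * jax.nn.logsumexp(-A.T @ x / TAU) + x @ A @ y   # column player
    return gain_row + gain_col  # vanishes (up to smoothing) exactly at a Nash equilibrium

loss_grad = jax.jit(jax.grad(smoothed_exploitability))

logits = jax.random.normal(jax.random.PRNGKey(0), (2, 3))  # random initial strategies
for _ in range(1000):
    logits = logits - 0.3 * loss_grad(logits)  # plain gradient descent on the loss

x, y = jax.nn.softmax(logits[0]), jax.nn.softmax(logits[1])
print("row strategy:", jnp.round(x, 3))  # approaches uniform, the unique NE of RPS
print("col strategy:", jnp.round(y, 3))
```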
I am a research scientist at Google DeepMind in London working on game theory and multiagent systems. I search for instances of games in the wild (e.g., Chat Games), problem settings exhibiting a conflict of agent incentives, and opportunities where game-theoretic techniques can be used to power broader AI models (e.g., EigenGame). In each case, we need powerful game solvers, which has been my longest-running research focus (e.g., SGA [1,2], NE-as-Opt). I received a BS in Mechanical Engineering and a BS/MS in Applied Math from Northwestern (2011), as well as an MS/PhD in CS from UMass Amherst (2018).
Note: Please register using the Google Form on our website https://go.umd.edu/marl for access to the Google Meet, the Open-source Multi-Agent AI Research Community, and talk resources.