Imagine two people living in different parts of the world. Wouldn’t it be amazing if they could communicate and interact with each other as if they were co-present in the same room? Enabling such an experience virtually, i.e., building a Codec Telepresence system that is indistinguishable from reality, is one of the goals of Reality Labs Research (RL-R) in Pittsburgh. To this end, we develop key technology that combines fundamental computer vision, machine learning, and graphics research based on a novel neural reconstruction and rendering paradigm. In this talk, I will explain what a Codec Telepresence system is and how it works, and cover recent research advances towards achieving our goal. In the future, this system will bring the world closer together by enabling anybody to communicate and interact with anyone, anywhere, at any time, as if everyone were sharing the same physical space.
Michael Zollhoefer is a Director at Reality Labs Research (RL-R) in Pittsburgh, where he leads a group of six research and engineering teams. His group focuses on building the technology required to develop a Codec Telepresence system that is indistinguishable from reality. Achieving this goal requires first-of-its-kind multi-view capture systems, complex pilot captures for data collection, and cutting-edge research on neural representations for avatars, audio, and spaces. Before joining RL-R, Michael was a Visiting Assistant Professor at Stanford University and a Postdoctoral Researcher at the Max Planck Institute for Informatics. He received his PhD from the University of Erlangen-Nuremberg for his work on real-time reconstruction of static and dynamic scenes.