PhD Proposal: Motion Modeling from Human to the World Around Us
Ting-Hsuan Liao
Wednesday, October 1, 2025, 10:00 am-1:30 pm
Abstract

Motion is a fundamental aspect of how humans and animals interact with the world, making motion modeling central to understanding dynamic phenomena. With recent advances in deep learning, motion modeling research has progressed rapidly, spanning both reconstruction from real-world observations and the generation of new motions.

This proposal investigates how to model realistic motion conditioned on context. We first propose ShapeMove, a text-driven human motion synthesis framework that explicitly incorporates body shape. Unlike prior methods that assume a canonical body model, ShapeMove predicts both motion and body shape parameters, enabling the synthesis of shape-aware motions that capture how the same action is performed differently across diverse body morphologies. Next, we introduce PAD3R, a framework for reconstructing dynamic, arbitrary objects from casual monocular videos without relying on predefined templates. PAD3R disentangles object motion from camera motion and leverages long-term point tracking to regularize non-rigid deformations, enabling template-free dynamic 3D reconstruction directly from video.

Together, these contributions bridge generation and reconstruction under a unified lens of context, advancing controllable motion modeling across humans and general dynamic objects.

Bio

Ting-Hsuan Liao is a 4th-year PhD student advised by Prof. Jia-Bin Huang. She has interned at Intel Labs and Adobe Research. Her research primarily focuses on 3D/4D computer vision.

This talk is organized by Migo Gui