Despite recent advances in machine learning methods for robotics, existing learning-based approaches often lack sample efficiency, posing a significant challenge due to the considerable time required to collect real-robot data. In this talk, I will present our innovative methods that tackle this challenge by leveraging the inherent symmetries in the physical environment as an inductive bias in robot learning. Specifically, I will outline a comprehensive framework of equivariant policy learning—which exploits known symmetry properties to constrain and guide learning—and its application across various robotic problem settings, including reinforcement learning and behavior cloning. Our methods significantly outperform state-of-the-art baselines while achieving these results with far less data, both in simulation and in the real world. Furthermore, our approach demonstrates robustness in the presence of symmetry distortions, such as variations in camera angles, highlighting its potential for more reliable real-world deployment.
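To make the core idea concrete (this is an illustrative sketch, not the talk's actual architecture): one simple way to build an equivariant policy is group-averaging, where an arbitrary base policy is symmetrized over a discrete symmetry group so that rotating the observation provably rotates the action. The minimal numpy example below uses the planar 90°-rotation group C4 and a made-up base policy for demonstration.

```python
import numpy as np

# The cyclic group C4: rotations by 0, 90, 180, and 270 degrees in the plane.
C4 = [np.array([[np.cos(t), -np.sin(t)],
                [np.sin(t),  np.cos(t)]])
      for t in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

def base_policy(obs):
    """An arbitrary, NOT equivariant policy mapping a 2-D observation
    (e.g., an object position) to a 2-D action. Weights are made up."""
    W = np.array([[0.7, -0.3],
                  [0.2,  0.9]])
    return np.tanh(W @ obs) + np.array([0.1, -0.05])

def equivariant_policy(obs):
    """Symmetrize the base policy by averaging over the group:
    pi_eq(x) = (1/|G|) * sum_g g^{-1} pi(g x).
    For rotation matrices, g^{-1} = g.T. A change of variables g -> g h
    shows pi_eq(h x) = h pi_eq(x) for every h in the group."""
    return sum(g.T @ base_policy(g @ obs) for g in C4) / len(C4)

# Equivariance check: rotating the input rotates the output accordingly.
obs = np.array([0.4, -0.2])
for g in C4:
    assert np.allclose(equivariant_policy(g @ obs), g @ equivariant_policy(obs))
```

In practice, equivariant architectures bake this constraint into each layer (e.g., steerable convolutions) rather than averaging a black-box policy, but the group-averaging identity above captures why symmetry acts as an inductive bias: the constrained hypothesis class is smaller, so fewer samples are needed.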