Large language models (LLMs) are rapidly being integrated into personal, educational, business, and even governmental workflows, where they are increasingly treated as "collaborators" with humans. Generative AI has been sold as an opportunity to increase speed and efficiency, but this pitch is predicated on the assumption that moving faster always produces better outcomes. In this talk, I will discuss how inserting appropriate "friction" into human-AI interaction can instead lead to slower, more deliberate exchanges that intentionally sacrifice short-term speed for long-term gains in task performance. Specifically, I will describe how we develop AI systems that manage and mediate the common ground that emerges in multi-party collaborative interaction, modeling it as a set of inferred belief states; use these belief states to expose unspoken assumptions and misalignments within the group; and build a novel "LLM-agent" alignment framework that prompts the group to slow down, deliberate, and resolve misapprehensions before making decisions. Our results show that, across multiple collaborative tasks, inserting positive friction correlates with faster convergence to common ground and more correct task solutions. Finally, I will show how similar mechanisms can discriminate helpful from unhelpful AI interventions, suggesting a potential path toward mitigating deceptive or "rogue" AI behavior.