How can users trust an AI system that fails in unpredictable ways? Machine learning models, while powerful, do not always behave as expected. This uncertainty becomes even more pronounced in domains where verification is challenging, such as machine translation, and where reliance depends on adherence to community values, such as student assignment algorithms. Providing users with guidance on when to rely on a system is difficult because models can produce a wide range of outputs (e.g., text), error boundaries are highly stochastic, and automated explanations themselves may be incorrect. In this talk, I will first focus on the case of healthcare communication, sharing approaches for improving the reliability of ML-based systems by helping users gauge reliability and recover from potential errors. Next, I will turn to the case of student assignment algorithms to examine modeling assumptions and perceptions of fairness in AI systems.