Reinforcement learning techniques aim to solve complex decision-making problems entirely through interaction with an environment, guided by an external reward function that signals whether the model's behavior is good or bad. While there have been some astonishing successes here---winning at Atari games, Go, StarCraft, Dota, and more---these successes are incredibly data hungry: they require an unreasonably large number of trials in order to learn. As a result, such techniques are largely limited to fully simulated settings (like games) that can be played, and failed at, millions or billions of times before success. I'll discuss some work in two high-level directions that aim to bring a human into the loop in order to make learning feasible in lower-resource settings. The first is algorithms that give experts new ways to offer "advice" to the learning algorithm; the second is agents that learn to ask for help from humans in the environment when they need it.
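To make the interaction loop above concrete, here is a minimal sketch of tabular Q-learning on a toy "chain" environment. Everything in it---the environment, the constants, the optimistic initialization---is a hypothetical illustration for this abstract, not a method from the talk.

```python
import random

# Toy "chain" environment: states 0..4, where state 4 is the rewarding goal.
# Action 0 moves left, action 1 moves right. This environment and all
# hyperparameters are hypothetical illustrations.
N_STATES = 5
ACTIONS = (0, 1)

def step(state, action):
    """Environment dynamics: move left/right (clamped); +1 reward at the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

def q_learning(episodes=1000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Optimistic initialization (all values start at 1.0) encourages exploration.
    Q = [[1.0, 1.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        for _ in range(200):  # cap episode length
            if done:
                break
            # Epsilon-greedy: usually exploit, occasionally try a random action.
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda act: Q[s][act]))
            s2, r, done = step(s, a)
            # Temporal-difference update: nudge Q(s, a) toward the reward signal.
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES - 1)]
print(policy)  # learned greedy policy: move right in every state -> [1, 1, 1, 1]
```

Even this trivial task needs hundreds of episodes of trial and error, which is exactly the data-hunger problem the human-in-the-loop directions below aim to address.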