Recent advances in deep neural network architectures have enabled tremendous success on a number of difficult machine learning problems. While these results are impressive, producing a deployable neural network–based conversation model that can engage in open-domain discussion remains elusive. A dialogue system needs to generate meaningful and diverse responses that are simultaneously coherent with the input utterance and the overall dialogue topic. Unfortunately, earlier conversation models trained on naturalistic dialogue data suffer from limited contextual information and a lack of diversity. These problems often lead to generic, safe utterances in response to a wide variety of inputs.
In this talk, we will explore novel neural adversarial learning frameworks that mimic aspects of human behavior, allowing us to generate more human-like dialogue responses than previous work.
Oluwatobi Olabiyi (PhD) is a Senior Machine Learning Research Scientist in the Capital One Conversation AI Research Group. His research focuses specifically on neural generative dialogue and, more broadly, on decision making under uncertainty. Before joining Capital One, he worked on autonomous vehicle research at Toyota Research Institute. His background is in signal processing for cognitive communication systems; he earned his MS and PhD degrees in Electrical Engineering from Prairie View A&M University (Texas), where he co-authored more than 30 peer-reviewed articles published in international conferences and highly regarded refereed journals.