Advances in neural language models and distributional semantics have demonstrated the power of parameter sharing for natural language processing tasks. Human language processing similarly depends on the inter-relationships between words as well as on hierarchical structure. In this talk, I will present a technique that estimates the probabilities of quasi-semantic categories in context and show that the semantic structure of the linguistic future influences both human production times and eye movements during reading. I will additionally discuss some architectural considerations for applying different kinds of neural language models to this modeling task, with a focus on their geometric properties and objectives.
Dr. Cass Jacobs is a cognitive scientist and Assistant Professor of Linguistics at the University at Buffalo, where they run the Computational Linguistics and Cognition (CaLiCo) Lab. They use computational modeling and behavioral experiments to understand the cognitive processes that support language comprehension and production, particularly the way that learning and memory shape human linguistic knowledge.