Learning a variable language
Wednesday, February 3, 2016, 11:00 am-12:00 pm
Abstract

As an infant learns their native language, they must learn to recognize words in a variety of different contexts: different sentences, spoken at different speeds and in different ways. The infant must learn how much variation is permissible within a single word or sound category, and what sorts of variants are most likely to occur. Computational cognitive models provide insight into the acquisition process by showing what sorts of evidence are most useful for solving these learning problems. Yet many existing models are evaluated on unrealistically non-variable speech data. This talk will present work on modeling variability at the discrete (symbolic) level as well as some preliminary investigations into models of acoustics. I will discuss non-parametric Bayesian systems which learn words and sound categories from data, and compare their predictions to experimental evidence from language acquisition.

Bio

Micha Elsner is an assistant professor at OSU Linguistics, working on computational linguistics with the Clippers lab group. Before coming to Ohio State, he was a postdoctoral researcher at the University of Edinburgh, working with Sharon Goldwater. He received his Ph.D. from Brown University in 2011, advised by Eugene Charniak, with Mark Johnson and Regina Barzilay as committee members. At Brown, he worked in the Brown Laboratory for Linguistic Information Processing (BLLIP). He graduated from the University of Rochester in 2005 with degrees in Computer Science and Classics.


This talk is organized by Naomi Feldman