Much of the language we experience on a day-to-day basis is composed of familiar rather than novel combinations. These combinations are understood more easily and produced more fluently, suggesting that their processing is more automatized. However, the mechanisms responsible for automatization in language are not well understood. In this talk, I will examine the factors contributing to automatization in language. I will argue that automatization influences the relative availability of contextual and top-down cues, such that top-down cues differ in relative strength both between and within automatized units. I will share data from two studies: one on repetitions in disfluent adult speech and the other on morphosyntactic deficits in children. In both studies, the likelihood of generalizing a form to novel contexts is found to be reduced when the form repeatedly occurs within automatized units. These results support viewing automatization as the process of learning where top-down support may be needed and when planning may proceed largely on the basis of contextual cues.
Zara Harmon is a Postdoctoral Associate at the University of Maryland Institute for Advanced Computer Studies (UMIACS) and the Department of Linguistics. She received her PhD in linguistics from the University of Oregon in 2019 and joined the University of Maryland the same year. Her research explores how speakers extend familiar forms to novel contexts and the factors that influence the ease or difficulty of this process. Currently, she is using probabilistic models to investigate problems with the productivity of grammatical morphemes in the speech of children with Developmental Language Disorder (DLD).