
Virtual Colloquium by Allyson Ettinger (Chicago): "Understanding" and prediction: Controlled examinations of meaning extraction in natural language processing models

Oxley Hall
September 24, 2021
3:55PM - 5:15PM
Virtual Zoom meeting


Abstract: In recent years, the field of natural language processing (NLP) has made what appears to be incredible progress, with models even surpassing human performance on certain benchmark evaluations. How should we interpret these advances? Have these models achieved so-called language "understanding"? Operating on the premise that "understanding" will necessarily involve the capacity to extract and deploy meaning information, in this talk I will discuss a series of projects leveraging targeted tests to examine NLP models' ability to capture meaning in a systematic fashion. I will first discuss work probing model representations for compositional meaning, with a particular focus on disentangling compositional information from encoding of lexical properties. I'll then explore models' ability to extract and deploy meaning information during word prediction, applying tests inspired by psycholinguistics to examine the types of information that models are able to encode and access for anticipating words in context. In all cases, these investigations draw on insights and methods from linguistics and cognitive science in order to maintain human-driven standards for what constitutes language "understanding", and to ensure that tests are adequately controlled. The results of these studies suggest that although NLP models show a good deal of sensitivity to word-level information, and to a number of semantic and syntactic distinctions, they show little sign of capturing higher-level compositional meaning, of handling logical impacts of meaning components like negation, or of retaining access to detailed representations of meaning information conveyed in prior context. I will discuss implications of the findings both for currently dominant training paradigms in NLP and for the study of language processing in humans.
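For readers curious what such a targeted word-prediction test looks like in practice, the following is a minimal illustrative sketch (not the speaker's own code) in the spirit of the negation diagnostics from Ettinger's work, assuming the Hugging Face transformers library and the bert-base-uncased model; the sentences and model choice are assumptions for illustration only:

    # Minimal sketch of a psycholinguistics-inspired cloze test for a masked
    # language model; the model and minimal-pair sentences are illustrative.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")

    # Minimal pair probing sensitivity to negation: a model that handles the
    # logical impact of "not" should not rank "bird" highly in both frames.
    for sentence in ["A robin is a [MASK].", "A robin is not a [MASK]."]:
        print(sentence)
        for pred in fill(sentence, top_k=3):
            print(f"  {pred['token_str']}: {pred['score']:.3f}")

Comparing the model's top completions across the affirmative and negated frames gives a controlled measure of whether the prediction actually registers the negation, rather than just the lexical associations of "robin".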

Allyson Ettinger is Assistant Professor in the Department of Linguistics at the University of Chicago. 

Accommodation statement: If you require an accommodation such as live captioning or interpretation to participate in this event, please contact Ashwini Deo at deo.13@osu.edu. In general, requests made two weeks before the event will allow us to provide seamless access, but the university will make every effort to meet requests made after this date. 
