
Colloquium by Sam Bowman (NYU): Learning Acceptability Judgments from Raw Text Alone

October 25, 2019
3:55PM - 5:15PM
Oxley Hall 103

Department of Linguistics
linguistics@osu.edu

Abstract: Over the last two years, artificial neural network models have come close to (and in many cases surpassed) human-level performance on most preexisting benchmarks for language understanding. While many of these benchmarks have known limitations, these models are nonetheless strikingly effective, and it is increasingly plausible that they acquire substantial knowledge of the structure of English during a training procedure that relies almost exclusively on raw unannotated text. 

This talk surveys an ongoing line of research that attempts to use acceptability judgments as a lens through which to understand what these models are learning, and presents initial results that suggest that it is possible to learn to produce human-like patterns of acceptability judgments from raw text alone. In particular, I will briefly survey the striking results that the field has seen with large-scale neural network language models like ELMo, GPT, and BERT; and then discuss experiments with the CoLA corpus of acceptability judgments from published Linguistics literature and the forthcoming BLiMP corpus of expert-constructed minimal pairs.
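To illustrate the kind of minimal-pair evaluation used in BLiMP-style benchmarks, the sketch below (not from the talk; it assumes the Hugging Face transformers library and GPT-2 as the pretrained model, both stand-ins for whatever models the speaker discusses) shows the basic idea: a language model "gets a pair right" if it assigns higher probability to the acceptable sentence than to its unacceptable counterpart.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hypothetical minimal pair: the first sentence is acceptable, the second is not.
good = "The cats annoy Tim."
bad = "The cats annoys Tim."

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence):
    """Total log-probability the language model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean cross-entropy
        # over the ids.size(1) - 1 predicted tokens; rescale to a total log-prob.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

# The model scores the pair correctly if the acceptable sentence gets the higher score.
print(sentence_log_prob(good) > sentence_log_prob(bad))
```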
