Colloquium by Michael White (OSU): Does Neural NLG Need Linguistics?

Michael White
February 21, 2020
3:55PM - 5:15PM
Oxley Hall 103

Over the past few years, a deep learning wave has swept over the computational linguistics community, displacing methods that made explicit use of linguistic knowledge and representations. In the subfield of natural language generation (NLG) in particular, neural methods arrived with much fanfare and quickly became the dominant approach in recent shared task challenges. While neural methods promise robust and flexible models that are easy to train across languages, recent studies have revealed their inability to produce satisfactory output for longer or more complex texts, as well as how the black-box nature of these models makes them difficult to control, in contrast to traditional NLG architectures. In this talk, I will review recent work by my group and others that makes explicit use of linguistically motivated intermediate representations in neural NLG in order to improve coherence and cohesion, and speculate on ways in which linguistically informed models of production and comprehension might find their way into neural NLG going forward.
