MIDAS Seminar Series Presents: Edward Stabler – UCLA
February 10 @ 3:30 pm - 4:30 pm
Room 340 West Hall
Professor of Linguistics, University of California, Los Angeles
Adversarial Training for Semantic Parsing
ABSTRACT: Semantic parsing maps natural language sentences to formal representations of their meanings. In most natural language processing applications, the semantic representation is stripped down to just what is needed for the relevant task, and enriched with a task-specific understanding of what is going on. And when more than one meaning is possible, the right one, in context, should be chosen. This makes semantic parsing easier than linguistic conceptions of parsing in some respects, and much harder in others. Learning semantic mappings with current deep learning systems typically requires extensive meaning-annotated data that can be very expensive to collect.

To address this problem, two ideas from visual learning can be adapted to the language domain. First, as in visual tasks, linguistic tasks have abstract invariants that can be identified and respected. Then, it is easy to construct examples that respect the invariants and are structurally close to the training data, but are misclassified by the learner. In vision, when such ‘adversarial’ examples differ imperceptibly from training data, they have three valuable properties: (a) they are still relevant to the task, (b) they need no new annotation because they do not change what is to be recognized, and (c) they can be added to the training data to correct errors made by the learning system.

In the language domain, we can similarly identify natural invariant-preserving adversarial examples that are worst-case in the sense that they minimize the structural distance from the training examples and maximize the error of the learning system. And these examples can be chosen so that compositional adjustments on the output side remove the need for hand annotation. This brings all three valuable properties of adversarial examples (a-c) to the linguistic domain in a maximally general way, applicable in many settings.
The method extends to semantic parsing for dialogue, where dialogue invariants are respected, and where the need for extensive training is especially acute. Comparing the adversarial training of various deep learning architectures provides an interesting perspective on their strengths and weaknesses.
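The idea of worst-case, invariant-preserving adversarial examples can be illustrated with a minimal sketch. Everything below is hypothetical and not from the talk: a deliberately brittle rule-based "parser" stands in for a learned model, a synonym table stands in for the meaning-preserving invariants, and an edit-distance ratio stands in for structural distance. The sketch then selects the paraphrase the parser misclassifies that is closest to the original, and reuses the original gold logical form unchanged, so no new annotation is needed.

```python
# Hypothetical toy sketch of invariant-preserving adversarial example
# selection; the names, synonym table, and "parser" are illustrative only.
from difflib import SequenceMatcher

# Meaning-preserving lexical invariants: swapping these words does not
# change the target logical form (an assumed, toy invariant set).
SYNONYMS = {"movies": ["films"], "show": ["list", "display"]}

def toy_parse(sentence):
    """A brittle stand-in for a learned semantic parser: it only
    recognizes the exact words it was 'trained' on."""
    tokens = sentence.lower().split()
    if "show" in tokens and "movies" in tokens:
        return "list(movie)"
    return "UNKNOWN"

def paraphrases(sentence):
    """Yield one-word substitutions licensed by the invariants."""
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        for alt in SYNONYMS.get(tok.lower(), []):
            yield " ".join(tokens[:i] + [alt] + tokens[i + 1:])

def distance(a, b):
    """Proxy for structural distance: 1 minus a string-similarity ratio."""
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def worst_case_adversarial(sentence, gold):
    """Among the invariant-preserving paraphrases the parser gets wrong,
    return the one closest to the original (minimal distance, maximal
    error). The gold logical form carries over unchanged."""
    failures = [p for p in paraphrases(sentence) if toy_parse(p) != gold]
    return min(failures, key=lambda p: distance(sentence, p), default=None)

sentence = "show me the movies"
gold = toy_parse(sentence)                      # "list(movie)"
adv = worst_case_adversarial(sentence, gold)
print(adv)  # a minimally-changed paraphrase the toy parser fails on
# The pair (adv, gold) could then be added to the training data.
```

A real system would replace the synonym table with richer linguistic (or dialogue) invariants, the string ratio with a structural distance over parses, and the rule-based parser with a trained model whose loss is maximized; the selection principle, however, is the same.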
For more information on MIDAS or the Seminar Series, please contact firstname.lastname@example.org.