Learning and Inference with Structured Representations
Natural language decisions often involve assigning values to sets of variables where complex and expressive dependencies can influence, or even dictate, which assignments are possible. This is common in natural language tasks ranging from predicting the part-of-speech (POS) tags of words in context -- governed by sequential constraints such as that no three consecutive words are verbs -- to semantic parsing -- governed by constraints such as that certain verbs must have, somewhere in the sentence, three arguments of specific semantic types.
I will describe research on a framework that combines learning and inference for the problem of assigning globally optimal values to a set of variables with complex and expressive dependencies among them.
The inference process of assigning globally optimal values to mutually dependent variables is formalized as an optimization problem and solved as an integer linear program (ILP). Two general classes of training processes are presented. In the first, the inference process used to derive a global assignment to the variables of interest is decoupled from the process of learning estimators of the variables' values; in the second, the dependencies among the variables are incorporated into the learning process itself, directly inducing estimators that yield a global assignment.
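To make the decoupled setting concrete, here is a minimal sketch of constrained inference for the POS-tagging example above. All scores, tags, and the sentence length are invented for illustration, and an exhaustive search over assignments stands in for the ILP solver that would be used in practice; the objective (sum of local classifier scores) and the global constraint (no three consecutive verbs) are exactly what the ILP would encode.

```python
from itertools import product

# Hypothetical per-token scores from independently trained local
# classifiers (decoupled from inference); all numbers are made up.
TAGS = ("NOUN", "VERB")
scores = [  # one dict per token: tag -> score
    {"NOUN": 0.2, "VERB": 0.9},
    {"NOUN": 0.3, "VERB": 0.8},
    {"NOUN": 0.4, "VERB": 0.7},
    {"NOUN": 0.9, "VERB": 0.1},
]

def no_three_consecutive_verbs(tags):
    """Global structural constraint: no run of three VERB tags."""
    return all(tags[i:i + 3] != ("VERB",) * 3 for i in range(len(tags) - 2))

def best_global_assignment(scores):
    """Exhaustive stand-in for ILP inference: maximize the summed local
    scores over all assignments satisfying the global constraint."""
    feasible = (t for t in product(TAGS, repeat=len(scores))
                if no_three_consecutive_verbs(t))
    return max(feasible, key=lambda t: sum(s[tag] for s, tag in zip(scores, t)))
```

Note that the locally best assignment (VERB, VERB, VERB, NOUN) is infeasible, so inference overrides the third token's locally preferred tag to produce a globally optimal, constraint-satisfying assignment.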
I will show how this framework generalizes existing approaches to the problem of learning structured representations, and discuss the relative advantages of the two training paradigms in different situations. Examples will be given in the context of semantic role labeling and information extraction.