Figure 1:
An example from Closure illustrating how our model learns a latent structure over the input, where a representation and a denotation are computed for every span (for denotations we show the set of objects with probability > 0.5). For brevity, some phrases were merged into a single tree node. For each phrase, we show the split point and module with the highest probability, although all possible split points and module outputs are softly computed. Skip(L) and Skip(R) refer to taking the denotation of the left or right sub-span, respectively.
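The soft computation the caption describes can be sketched as a chart over all spans: each span's representation is a softmax-weighted mixture over every split point, rather than a hard choice. The sketch below is hypothetical; the embeddings, composition weights, and split-scoring function are random stand-ins for the learned networks, and module selection is omitted.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_chart(tokens, d=8, rng=np.random.default_rng(0)):
    """Compute a soft representation for every span (i, j) of the input.

    Stand-in sketch: a real model would use learned token embeddings,
    composition weights, and split-point scorers; here they are random.
    """
    n = len(tokens)
    W = rng.normal(size=(2 * d, d))           # stand-in composition weights
    rep = {}                                   # span (i, j) -> d-dim vector
    for i in range(n):
        rep[(i, i + 1)] = rng.normal(size=d)   # stand-in token embeddings
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            cands, scores = [], []
            for k in range(i + 1, j):          # every split point contributes
                h = np.tanh(np.concatenate([rep[(i, k)], rep[(k, j)]]) @ W)
                cands.append(h)
                scores.append(h.sum())         # stand-in split score
            w = softmax(np.array(scores))      # soft weighting over splits
            rep[(i, j)] = sum(wi * c for wi, c in zip(w, cands))
    return rep

chart = soft_chart("there is a small gray block".split())
```

With n tokens the chart holds all n(n+1)/2 spans, so the full input's representation is read off the span (0, n). The same recursion would carry denotations alongside representations, with Skip(L) / Skip(R) simply copying the left or right sub-span's denotation.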

