Figure 1: Architecture of Sgcp (proposed method). Sgcp aims to paraphrase an input sentence while conforming to the syntax of an exemplar sentence (provided along with the input). The input sentence is encoded using the Sentence Encoder (Section 3.2) to obtain a semantic signal c_t. The Syntactic Encoder (Section 3.3) takes the constituency parse tree of the exemplar sentence (pruned at height H) as input and produces representations for all the nodes in the pruned tree. Once both are encoded, the Syntactic Paraphrase Decoder (Section 3.4) uses a pointer-generator network; at each time step it takes the semantic signal c_t, the decoder recurrent state s_t, the embedding of the previous token, and the syntactic signal h_t^Y to generate a new token. Note that the syntactic signal remains the same for every token in a span (shown in the figure above the curly braces; please see Figure 2 for more details). The gray shaded region (not part of the model) illustrates a qualitative comparison of the exemplar syntax tree and the syntax tree obtained from the generated paraphrase. Please refer to Section 3 for details.
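To make the decoder step concrete, the following is a minimal numpy sketch of one pointer-generator decoding step as the caption describes it: the semantic signal c_t, the decoder state s_t, the previous token's embedding, and the syntactic signal h_t^Y are combined, then a generation distribution over the vocabulary is mixed with a copy distribution over source tokens via a learned gate. All dimensions, weight names, and the concatenation-then-projection layout are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
d, vocab_size, src_len = 8, 20, 5

# Hypothetical (randomly initialized) projection weights.
W_out = rng.normal(size=(4 * d, vocab_size))  # maps combined features to vocab logits
w_gen = rng.normal(size=4 * d)                # gate that decides generate vs. copy

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_step(c_t, s_t, prev_emb, h_t_Y, attn, src_ids):
    """One decoding step: mix a generation distribution over the vocabulary
    with a copy distribution over source tokens (pointer-generator style)."""
    feats = np.concatenate([c_t, s_t, prev_emb, h_t_Y])
    p_vocab = softmax(feats @ W_out)             # generation distribution
    p_gen = 1.0 / (1.0 + np.exp(-(feats @ w_gen)))  # sigmoid generation gate
    p_copy = np.zeros(vocab_size)
    for a, tok in zip(attn, src_ids):            # scatter attention mass onto source token ids
        p_copy[tok] += a
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy

# One illustrative step with random signals.
c_t, s_t = rng.normal(size=d), rng.normal(size=d)
prev_emb, h_t_Y = rng.normal(size=d), rng.normal(size=d)
attn = softmax(rng.normal(size=src_len))         # attention over source tokens
src_ids = rng.integers(0, vocab_size, size=src_len)
p = decode_step(c_t, s_t, prev_emb, h_t_Y, attn, src_ids)
```

Because both the generation and copy distributions sum to one and the gate is a convex weight, the mixed output is itself a valid probability distribution over the vocabulary.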

