Walkthrough example of our approach. The task input includes information about the entities participating in the relation (denoted as subject and object) and their types (PERSON here). Our neural architecture, which includes both a relation classifier and an explanation classifier, predicts the relation that holds between the two entities (per:children here, i.e., the object is the child of the subject), as well as which words best explain the decision (in red). In step (c), the rule generator collects the necessary information from the annotated sentence, i.e., the shortest syntactic dependency path connecting the two entities, together with the explanation words (in red in the figure). Step (d) shows the generated rule in the Odin language.
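To make step (c) concrete, the sketch below finds the shortest path between two entity head words over a toy dependency parse. This is an illustrative simplification, not the paper's implementation: the sentence, the edge list, and the helper name shortest_dep_path are assumptions introduced here, and the dependency tree is treated as an undirected graph for the BFS.

```python
from collections import deque

def shortest_dep_path(edges, start, end):
    """BFS over an undirected view of a dependency parse to find the
    shortest path of tokens connecting two entity head words."""
    graph = {}
    for head, dep in edges:
        graph.setdefault(head, set()).add(dep)
        graph.setdefault(dep, set()).add(head)
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path (disconnected parse)

# Toy parse of "Alice's son John smiled": edges given as (head, dependent).
edges = [("smiled", "John"), ("John", "son"), ("son", "Alice")]
print(shortest_dep_path(edges, "Alice", "John"))  # ['Alice', 'son', 'John']
```

Note that the recovered path passes through "son", the kind of explanation word the classifier highlights in red, which is exactly why such paths make good skeletons for the generated Odin rules in step (d).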