Table 10: 
Intrinsic and extrinsic experiment results for baselines and LexSub trained with the lexical resource from LEAR. We observe a similar trend in the intrinsic and extrinsic evaluations as when the models were trained on the lexical resources from Section 4.2. This indicates that LexSub's stronger performance is due to our novel subspace-based formulation rather than to an ability to better exploit a specific lexical resource.
                 Relatedness Tasks         Similarity Tasks
Models           MEN-3k (ρ)   WS-353R (ρ)  SimLex (ρ)   SimVerb (ρ)
Vanilla          0.7375       0.4770       0.3705       0.2275
Retrofitting     0.7451       0.4662       0.4561       0.2884
Counterfitting   0.6034       0.2820       0.5605       0.4260
LEAR             0.5024       0.2300       0.7273       0.7050
LexSub           0.7562       0.4787       0.4838       0.3371

(a) Intrinsic evaluation results for baselines and LexSub trained with the lexical resource from LEAR.