Abstract
This paper presents the results of a large-scale evaluation study of window-based Distributional Semantic Models on a wide variety of tasks. Our study combines broad coverage of model parameters with a model selection methodology that is robust to overfitting and able to capture parameter interactions. We show that our strategy allows us to identify parameter configurations that achieve good performance across different datasets and tasks.
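To make the model family concrete, the following is a minimal illustrative sketch of a window-based distributional semantic model: co-occurrence counts collected within a fixed symmetric context window, reweighted with PPMI, and compared with cosine similarity. The window size, weighting scheme, and similarity measure stand in for the kinds of parameters such an evaluation varies; the code is an assumption-laden toy example, not the paper's actual experimental setup.

```python
# Sketch of a window-based distributional semantic model (illustrative only).
from collections import Counter
import math

def cooccurrence_counts(tokens, window=2):
    """Count co-occurrences of each target word with context words
    within `window` positions to either side."""
    counts = Counter()
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[(target, tokens[j])] += 1
    return counts

def ppmi(counts):
    """Reweight raw counts with positive pointwise mutual information."""
    total = sum(counts.values())
    t_tot, c_tot = Counter(), Counter()
    for (t, c), n in counts.items():
        t_tot[t] += n
        c_tot[c] += n
    return {(t, c): max(0.0, math.log(n * total / (t_tot[t] * c_tot[c])))
            for (t, c), n in counts.items()}

def word_vector(word, weights):
    """Sparse context vector for `word` from weighted co-occurrence pairs."""
    return {c: w for (t, c), w in weights.items() if t == word}

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus; a real evaluation uses large corpora and tuned parameters.
tokens = "the cat sat on the mat and the dog sat on the rug".split()
weights = ppmi(cooccurrence_counts(tokens, window=2))
print(cosine(word_vector("cat", weights), word_vector("dog", weights)))
```

In a study of this kind, the window size, the association measure (e.g. raw counts vs. PPMI), and the similarity function are exactly the sort of parameters whose interactions the model selection methodology is designed to capture.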
©2014 Association for Computational Linguistics.