Preslav Nakov (1-4 of 4 journal articles)
Journal Articles
Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph
Open Access. Publisher: Journals Gateway
Transactions of the Association for Computational Linguistics (2025) 13: 220–248.
Published: 19 March 2025
Journal Articles
A Neighborhood Framework for Resource-Lean Content Flagging
Publisher: Journals Gateway
Transactions of the Association for Computational Linguistics (2022) 10: 484–502.
Published: 04 May 2022
Abstract
We propose a novel framework for cross-lingual content flagging with limited target-language data, which significantly outperforms prior work in terms of predictive performance. The framework is based on a nearest-neighbor architecture. It is a modern instantiation of the vanilla k-nearest neighbor model, as we use Transformer representations in all its components. Our framework can adapt to new source-language instances, without the need to be retrained from scratch. Unlike prior work on neighborhood-based approaches, we encode the neighborhood information based on query–neighbor interactions. We propose two encoding schemes and we show their effectiveness using both qualitative and quantitative analysis. Our evaluation results on eight languages from two different datasets for abusive language detection show sizable improvements of up to 9.5 F1 points absolute (for Italian) over strong baselines. On average, we achieve 3.6 absolute F1 points of improvement for the three languages in the Jigsaw Multilingual dataset and 2.14 points for the WUL dataset.
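The retrieval idea can be illustrated with a vanilla k-NN baseline over multilingual Transformer embeddings. The sketch below shows only that baseline, not the paper's full framework (which additionally encodes query–neighbor interactions); the encoder name, the toy data, and the choice of k are illustrative assumptions.

```python
# Hedged sketch: vanilla k-NN content flagging over multilingual Transformer
# embeddings (the baseline the abstract refers to), NOT the paper's framework.
# Model name, toy labels, and k are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import KNeighborsClassifier

# Toy source-language training data (1 = abusive, 0 = benign).
train_texts = ["you are an idiot", "have a nice day", "I will hurt you", "great talk!"]
train_labels = [1, 0, 1, 0]

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed encoder
train_emb = encoder.encode(train_texts, normalize_embeddings=True)

# Cosine k-NN over the normalized embeddings.
knn = KNeighborsClassifier(n_neighbors=3, metric="cosine")
knn.fit(train_emb, train_labels)

# A target-language (Italian) query; new labeled neighbors can simply be appended
# to the index without retraining the encoder, mirroring the adaptability the
# abstract highlights.
query_emb = encoder.encode(["sei un idiota"], normalize_embeddings=True)
print(knn.predict(query_emb), knn.predict_proba(query_emb))
```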
Journal Articles
Compressing Large-Scale Transformer-Based Models: A Case Study on BERT
Publisher: Journals Gateway
Transactions of the Association for Computational Linguistics (2021) 9: 1061–1080.
Published: 21 September 2021
Abstract
Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks. However, these models often have billions of parameters, and thus are too resource-hungry and computation-intensive to suit low-capability devices or applications with strict latency requirements. One potential remedy for this is model compression, which has attracted considerable research attention. Here, we summarize the research in compressing Transformers, focusing on the especially popular BERT model. In particular, we survey the state of the art in compression for BERT, we clarify the current best practices for compressing large-scale Transformer models, and we provide insights into the workings of various methods. Our categorization and analysis also shed light on promising future research directions for achieving lightweight, accurate, and generic NLP models.
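One family of compression methods such a survey covers, post-training quantization, can be sketched in a few lines with off-the-shelf tooling. The snippet below applies PyTorch dynamic int8 quantization to a BERT classifier; the model name and the size-measurement helper are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: post-training dynamic quantization of BERT's linear layers to
# int8, one of the compression families surveyed. The model choice is an assumption.
import os
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Replace nn.Linear weights with int8 versions; activations are quantized on the
# fly at inference time, shrinking the model and speeding up CPU inference at a
# small cost in accuracy.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m):
    # Rough on-disk size of the serialized weights.
    torch.save(m.state_dict(), "tmp.pt")
    mb = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return mb

print(f"fp32: {size_mb(model):.1f} MB -> int8 linears: {size_mb(quantized):.1f} MB")
```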
Journal Articles
Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications
Open Access. Publisher: Journals Gateway
Transactions of the Association for Computational Linguistics (2019) 7: 121–138.
Published: 01 April 2019
Abstract
Stacking long short-term memory (LSTM) cells or gated recurrent units (GRUs) as part of a recurrent neural network (RNN) has become a standard approach to solving a number of tasks ranging from language modeling to text summarization. Although LSTMs and GRUs were designed to model long-range dependencies more accurately than conventional RNNs, they nevertheless have problems copying or recalling information from the long distant past. Here, we derive a phase-coded representation of the memory state, Rotational Unit of Memory (RUM), that unifies the concepts of unitary learning and associative memory. We show experimentally that RNNs based on RUMs can solve basic sequential tasks such as memory copying and memory recall much better than LSTMs/GRUs. We further demonstrate that by replacing LSTM/GRU with RUM units we can apply neural networks to real-world problems such as language modeling and text summarization, yielding results comparable to the state of the art.
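The norm-preserving intuition behind rotational/unitary memory can be sketched with a recurrent cell whose state update applies an orthogonal (rotation) transform to the previous hidden state. This is a simplified illustration of the idea only, not the paper's RUM equations (which derive the rotation from the current input and a target memory); the sizes and the parametrization are assumptions.

```python
# Hedged sketch of the rotational-memory idea: the previous hidden state is
# transformed by an orthogonal matrix (a rotation), which preserves its norm so
# old information is not attenuated. NOT the paper's exact RUM cell.
import torch
import torch.nn as nn

class RotationalCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.inp = nn.Linear(input_size, hidden_size)
        # A is unconstrained; A - A^T is skew-symmetric, and its matrix exponential
        # is orthogonal, i.e. a rotation of the hidden state.
        self.A = nn.Parameter(0.01 * torch.randn(hidden_size, hidden_size))

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        rot = torch.linalg.matrix_exp(self.A - self.A.T)  # orthogonal update matrix
        return torch.tanh(self.inp(x)) + h @ rot.T        # rotate old memory, add new input

# Toy usage: run a random 5-step sequence through the cell.
cell = RotationalCell(input_size=8, hidden_size=16)
h = torch.zeros(1, 16)
for _ in range(5):
    h = cell(torch.randn(1, 8), h)
print(h.shape)  # torch.Size([1, 16])
```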