Recall@1 for retrieving the passage containing a query from three million candidates. The figure compares a fine-tuned BERT-based dual encoder (de-bert-768), an off-the-shelf BERT-based encoder with average pooling (bert-init), and sparse term-based retrieval (bm25), with passages binned by length.
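As a point of reference, the Recall@1 metric in the figure can be sketched as below: the fraction of queries whose top-scoring candidate is the gold passage. The function name and the toy score matrix are illustrative, not taken from the paper.

```python
import numpy as np

def recall_at_1(scores: np.ndarray, gold: np.ndarray) -> float:
    """Fraction of queries whose top-ranked candidate is the gold passage.

    scores: (num_queries, num_candidates) query-passage similarity matrix
    gold:   (num_queries,) index of the correct passage for each query
    """
    top1 = scores.argmax(axis=1)          # highest-scoring candidate per query
    return float((top1 == gold).mean())   # hit rate at rank 1

# toy example: 3 queries, 4 candidate passages
scores = np.array([
    [0.9, 0.1, 0.0, 0.2],   # top-1 is candidate 0 (correct)
    [0.2, 0.3, 0.8, 0.1],   # top-1 is candidate 2 (correct)
    [0.5, 0.6, 0.1, 0.4],   # top-1 is candidate 1, but gold is 0 (miss)
])
gold = np.array([0, 2, 0])
print(recall_at_1(scores, gold))  # 2 of 3 queries hit at rank 1
```

In a real evaluation the `scores` row for a query would come from dot products between its embedding and the three million candidate embeddings (or from bm25 scores), rather than a dense in-memory matrix.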