Submitted by Singularian2501 in MachineLearning
Paper: https://arxiv.org/abs/2212.01349
Github: https://github.com/facebookresearch/NPM
Abstract:
>Existing language models (LMs) predict tokens with a softmax over a finite vocabulary, which can make it difficult to predict rare tokens or phrases. We introduce NPM, the first nonparametric masked language model that replaces this softmax with a nonparametric distribution over every phrase in a reference corpus. We show that NPM can be efficiently trained with a contrastive objective and an in-batch approximation to full corpus retrieval. Zero-shot evaluation on 9 closed-set tasks and 7 open-set tasks demonstrates that NPM outperforms significantly larger parametric models, either with or without a retrieve-and-generate approach. It is particularly better at dealing with rare patterns (word senses or facts) and predicting rare or nearly unseen words (e.g., non-Latin script).
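For intuition, here is a minimal sketch of the core idea from the abstract: instead of a softmax over a fixed vocabulary, the masked position is filled by retrieving the nearest phrase from a reference corpus under a similarity score. This is plain NumPy, and the function and variable names are my own, not the NPM codebase's API.

```python
import numpy as np

def fill_mask_nonparametric(query_vec, phrase_vecs, phrases, top_k=1):
    """Return the top-k corpus phrases closest to the masked-position query.

    query_vec:   (d,) encoder output at the [MASK] position
    phrase_vecs: (n, d) precomputed embeddings of candidate corpus phrases
    phrases:     list of n phrase strings aligned with phrase_vecs
    """
    scores = phrase_vecs @ query_vec      # similarity to every corpus phrase
    top = np.argsort(-scores)[:top_k]     # nearest phrases win; no vocab softmax
    return [(phrases[i], float(scores[i])) for i in top]

# Toy usage: 4 corpus phrases in a 3-dim embedding space (made-up data).
rng = np.random.default_rng(0)
phrases = ["Seattle", "Thessaloniki", "the 1990s", "transformer"]
phrase_vecs = rng.normal(size=(4, 3))
query_vec = phrase_vecs[1] + 0.05 * rng.normal(size=3)  # near "Thessaloniki"
print(fill_mask_nonparametric(query_vec, phrase_vecs, phrases, top_k=2))
```

In the paper, phrases are represented by their start- and end-token embeddings, and the contrastive objective mentioned in the abstract trains the encoder so the [MASK] query lands near the gold phrase's embeddings and away from other phrases, using in-batch negatives as an approximation to retrieval over the full corpus.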
Dankmemexplorer wrote
time to train gpt-4 on my mom's laptop