---
license: apache-2.0
---
|
|
|
**Paper**: [Adapting Language Models to Compress Contexts](https://arxiv.org/abs/2305.14788) |
|
|
|
**Code**: https://github.com/princeton-nlp/AutoCompressors |
|
|
|
**Models**: |
|
- Llama-2-7b fine-tuned models: [AutoCompressor-Llama-2-7b-6k](https://huggingface.co/princeton-nlp/AutoCompressor-Llama-2-7b-6k/), [FullAttention-Llama-2-7b-6k](https://huggingface.co/princeton-nlp/FullAttention-Llama-2-7b-6k) |
|
- OPT-2.7b fine-tuned models: [AutoCompressor-2.7b-6k](https://huggingface.co/princeton-nlp/AutoCompressor-2.7b-6k), [AutoCompressor-2.7b-30k](https://huggingface.co/princeton-nlp/AutoCompressor-2.7b-30k), [RMT-2.7b-8k](https://huggingface.co/princeton-nlp/RMT-2.7b-8k), [FullAttention-2.7b-4k](https://huggingface.co/princeton-nlp/FullAttention-2.7b-4k) |
|
- OPT-1.3b fine-tuned models: [AutoCompressor-1.3b-30k](https://huggingface.co/princeton-nlp/AutoCompressor-1.3b-30k), [RMT-1.3b-30k](https://huggingface.co/princeton-nlp/RMT-1.3b-30k) |
|
|
|
--- |
|
|
|
RMT-1.3b-30k is a model fine-tuned from [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) following the RMT method as described in [Recurrent Memory Transformer](https://arxiv.org/abs/2207.06881) and [Adapting Language Models to Compress Contexts](https://arxiv.org/abs/2305.14788). |
|
The model was fine-tuned on 2B tokens from the Books3 subset of [The Pile](https://pile.eleuther.ai), using sequences of 30,720 tokens with 50 summary vectors, summary accumulation, randomized segmenting, and stop-gradients.
|
|
|
To get started, clone the [`AutoCompressors`](https://github.com/princeton-nlp/AutoCompressors) repository and load the model as follows:
|
|
|
```python
from auto_compressor import AutoCompressorModel

# Requires auto_compressor.py from the AutoCompressors repository on your path
model = AutoCompressorModel.from_pretrained("princeton-nlp/RMT-1.3b-30k")
```
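
Once the model is loaded, a long context can be compressed into summary vectors and reused as a soft prompt for generation. The sketch below follows the usage example in the AutoCompressors repository README; the `output_softprompt` and `softprompt` keyword arguments are taken from that example and may change, so check the repository if the interface differs.

```python
import torch
from transformers import AutoTokenizer
from auto_compressor import AutoCompressorModel

# The checkpoint reuses the tokenizer of the base OPT model.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoCompressorModel.from_pretrained("princeton-nlp/RMT-1.3b-30k").eval()

context = ("AutoCompressors and RMT models compress long documents into a small "
           "number of summary vectors that can be reused as a soft prompt.")
prompt = "Summary vectors are useful because"

context_ids = tokenizer(context, add_special_tokens=False, return_tensors="pt").input_ids
prompt_ids = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids

with torch.no_grad():
    # Compress the context into 50 summary vectors.
    summary_vectors = model(context_ids, output_softprompt=True).softprompt
    # Generate a continuation of the prompt, conditioned on the summary vectors.
    output = model.generate(prompt_ids, softprompt=summary_vectors,
                            do_sample=False, max_new_tokens=20)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```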
|
|
|
## Evaluation
|
|
|
We report the perplexity achieved by our 30k-token fine-tuned OPT models on 2,048-token segments sampled from Books3 and ArXiv in The Pile, conditioned on different amounts of preceding context; a minimal sketch of this setup follows the table.
|
|
|
|
|
| Context Tokens          | 0     | 14,336 | 28,672 |
|-------------------------|-------|--------|--------|
| RMT-1.3b-30k            | 13.18 | 12.50  | 12.50  |
| AutoCompressor-1.3b-30k | 13.21 | 12.49  | 12.47  |
| AutoCompressor-2.7b-30k | 11.86 | 11.21  | 11.18  |
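
The sketch below illustrates this kind of evaluation: it compresses a stretch of preceding text into summary vectors and then scores a held-out segment conditioned on them. It is not the exact evaluation script from the paper; the `output_softprompt`/`softprompt` keywords are assumed from the repository's usage example, the toy text stands in for Books3/ArXiv data, and a 512-token segment is used instead of the 2,048-token segments reported above.

```python
import torch
from transformers import AutoTokenizer
from auto_compressor import AutoCompressorModel

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoCompressorModel.from_pretrained("princeton-nlp/RMT-1.3b-30k").eval()

# Toy stand-in for a long document (the paper evaluates on Books3 and ArXiv).
text = " ".join(["The quick brown fox jumps over the lazy dog."] * 200)
ids = tokenizer(text, add_special_tokens=False, return_tensors="pt").input_ids

segment_ids = ids[:, -512:]   # held-out segment to score
context_ids = ids[:, :-512]   # preceding context to compress

with torch.no_grad():
    # Compress the preceding context into summary vectors.
    summary_vectors = model(context_ids, output_softprompt=True).softprompt
    # Forward pass over the segment, conditioned on the summary vectors.
    logits = model(segment_ids, softprompt=summary_vectors).logits
    # Keep only the positions of the segment tokens and score tokens 2..N.
    logits = logits[:, -segment_ids.size(1):]
    loss = torch.nn.functional.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        segment_ids[:, 1:].reshape(-1),
    )

print("perplexity:", torch.exp(loss).item())
```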
|
|
|
|
|
|
|
|
|
## Bibtex |
|
```bibtex
@misc{chevalier2023adapting,
      title={Adapting Language Models to Compress Contexts},
      author={Alexis Chevalier and Alexander Wettig and Anirudh Ajith and Danqi Chen},
      year={2023},
      eprint={2305.14788},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```