munish0838 committed · Commit a6dc33e
1 Parent(s): ff0e207
Upload README.md with huggingface_hub

README.md ADDED
---
license: llama3
datasets:
- princeton-nlp/prolong-data-64K
- princeton-nlp/prolong-data-512K
base_model:
- princeton-nlp/Llama-3-8B-ProLong-64k-Base
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Llama-3-8B-ProLong-512k-Base-GGUF

This is a quantized version of [princeton-nlp/Llama-3-8B-ProLong-512k-Base](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Base), created using llama.cpp.

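As a rough usage sketch (not part of the original card): GGUF files such as the ones in this repo can typically be run with llama.cpp or its Python bindings, llama-cpp-python. The quantization filename below is a placeholder assumption; substitute one of the files actually listed in this repository.

```python
# Sketch: download one GGUF quantization from this repo and run it with llama-cpp-python.
# The filename is a placeholder assumption; check the repo's file list for the real names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="QuantFactory/Llama-3-8B-ProLong-512k-Base-GGUF",
    filename="Llama-3-8B-ProLong-512k-Base.Q4_K_M.gguf",  # placeholder filename
)

# n_ctx sets the context window; the full 512K tokens needs far more RAM than most
# machines have, so start smaller and scale up as resources allow.
llm = Llama(model_path=gguf_path, n_ctx=32768)

# ProLong 512k-Base is a base (non-instruct) model, so plain text completion is used.
out = llm("Long-context language models are useful because", max_tokens=64)
print(out["choices"][0]["text"])
```
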
# Original Model Card

# princeton_nlp/Llama-3-8B-ProLong-512k-Base

[[Paper](https://arxiv.org/pdf/2410.02660)] [[HF Collection](https://huggingface.co/collections/princeton-nlp/prolong-66c72d55d2051a86ac7bd7e4)] [[Code](https://github.com/princeton-nlp/ProLong)]

**ProLong** (<u>Pr</u>incet<u>o</u>n <u>long</u>-context language models) is a family of long-context models built from Llama-3-8B through long-context continued training and supervised fine-tuning, with a maximum context window of 512K tokens. Our [main ProLong model](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct) is one of the best-performing long-context models at the 10B scale (evaluated by [HELMET](https://github.com/princeton-nlp/helmet)).

To train this strong long-context model, we conduct thorough ablations on the long-context pre-training data, the SFT data, and numerous other design choices. We present our findings in our paper, [How to Train Long-Context Language Models (Effectively)](https://arxiv.org/pdf/2410.02660).

Authors: [Tianyu Gao](https://gaotianyu.xyz/about)\*, [Alexander Wettig](https://www.cs.princeton.edu/~awettig/)\*, [Howard Yen](https://howard-yen.github.io/), [Danqi Chen](https://www.cs.princeton.edu/~danqic/) (* equal contribution)

Contact: `{tianyug, awettig}@princeton.edu`

## The ProLong Models

- [princeton_nlp/Llama-3-8B-ProLong-64k-Base](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-64k-Base)
- [princeton_nlp/Llama-3-8B-ProLong-64k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-64k-Instruct)
- [princeton_nlp/Llama-3-8B-ProLong-512k-Base](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Base) ← you are here!
- ⭐ [princeton_nlp/Llama-3-8B-ProLong-512k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct)

## Model card

Here are some quick facts about our main ProLong model: [princeton-nlp/Llama-3-8B-ProLong-512k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct).
* Base model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
* Long-context continued training: 20B tokens on 64K training data ([princeton-nlp/prolong-data-64K](https://huggingface.co/datasets/princeton-nlp/prolong-data-64K)), and 20B tokens on 512K training data ([princeton-nlp/prolong-data-512K](https://huggingface.co/datasets/princeton-nlp/prolong-data-512K))
* Supervised fine-tuning (SFT): [UltraChat](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
* Maximum context window: 512K tokens

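For the full-precision model (rather than this GGUF conversion), a minimal loading sketch with Hugging Face Transformers might look like the following; the dtype, device placement, and generation settings are illustrative assumptions, not a recommendation from the ProLong authors.

```python
# Sketch (assumptions noted above): load the full-precision base model with Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/Llama-3-8B-ProLong-512k-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; pick what your hardware supports
    device_map="auto",
)

# The model supports up to a 512K-token context, but memory is the practical limit;
# this example stays far below that. As a base model, it does plain completion.
prompt = "Long-context language models are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
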
<p align="center" style="margin-bottom: 0;">
<img width="80%" alt="image" src="https://github.com/user-attachments/assets/c31c9671-49fe-4776-91d2-de70ffd9f9a1">
</p>
<p align="center" style="margin-top: 0; padding-top: 0;">
<em>ProLong performance on <a href="https://github.com/princeton-nlp/helmet">HELMET</a> averaged over 32K, 64K, and 128K lengths. All models are instruct models.</em>
</p>

<p align="center">
<img width="80%" alt="image" src="https://github.com/user-attachments/assets/a36a7d0f-4480-4a29-80f3-208477707fb7">
</p>
<p align="center" style="margin-top: 0;">
<em>ProLong training recipe.</em>
</p>

## Citation

```bibtex
@article{gao2024prolong,
  title={How to Train Long-Context Language Models (Effectively)},
  author={Gao, Tianyu and Wettig, Alexander and Yen, Howard and Chen, Danqi},
  journal={arXiv preprint arXiv:2410.02660},
  year={2024}
}
```