Update README.md
README.md
CHANGED
@@ -3,6 +3,10 @@ license: llama2
---

6 bits per weight (bpw) quantization of [tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), for use with exllamav2.

+The calibration dataset is a cleaned and fixed PIPPA RP dataset, which does skew the results in favor of RP usage. You can find the calibration dataset [here](https://discord.com/channels/1111983596572520458/1152699950208139415/1152700764230271076) (you will need to be on TheBloke's Discord server).
+
+I've added a measurement.json file if you want to do your own quants.
+
# Original model page

<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png" alt="TuluV2 banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
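For reference, a minimal sketch of loading this quant with exllamav2's Python API, following the library's bundled inference examples. The local model path, sampling settings, and prompt are placeholders, and exact names may differ between exllamav2 versions; the prompt format shown is the Tulu 2 `<|user|>` / `<|assistant|>` template from the original model card.

```python
# Sketch: load the 6 bpw exl2 quant and generate a short completion.
# Paths and sampling values are examples only.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/tulu-2-dpo-70b-exl2-6bpw"  # local download of this repo (example path)
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache as layers are loaded
model.load_autosplit(cache)               # split the 70B model across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Tulu 2 prompt format: <|user|> ... <|assistant|>
prompt = "<|user|>\nWrite a short greeting.\n<|assistant|>\n"
print(generator.generate_simple(prompt, settings, 200))
```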
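And a rough sketch of reusing the included measurement.json to build your own quant at a different bitrate with exllamav2's convert.py. All paths and the 4.0 bpw target are examples, and the flags follow the converter's documentation at the time of writing, so check `python convert.py -h` in your checkout.

```python
# Sketch: re-quantize the original fp16 model, skipping the measurement pass
# by reusing this repo's measurement.json. Paths are placeholders.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "/models/tulu-2-dpo-70b",           # original fp16 model (example path)
        "-o", "/tmp/exl2-work",                   # scratch/working directory
        "-cf", "/models/tulu-2-dpo-70b-4.0bpw",   # output directory for the finished quant
        "-b", "4.0",                              # target bits per weight
        "-m", "measurement.json",                 # reuse the provided measurement file
    ],
    cwd="/path/to/exllamav2",                     # run from an exllamav2 checkout (example path)
    check=True,
)
```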