TheYuriLover committed
Commit 5b84544 · 1 Parent(s): 0f3452c

Update README.md

Files changed (1)
  1. README.md +4 -10
README.md CHANGED
@@ -1,13 +1,7 @@
- [Airoboros 13b GPT4 1.4](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4) merged with kaiokendev's [SuperHOT 8k](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) LoRA.

- The code to merge these can be found [here](https://files.catbox.moe/mg5v4g.py). Change information as needed.

- NOTE: This requires a monkey patch to work. FlashVenom has, along with kindly quantising this model to 4bit, added the monkeypatch file to their repo. You can access this [here](https://huggingface.co/flashvenom/Airoboros-13B-SuperHOT-8K-4bit-GPTQ).

- FROM THE ORIGINAL LORA MODEL CARD:
- This is a second prototype of SuperHOT, this time with 4K context and no RLHF. In my testing, it can go all the way to 6K without breaking down and I made the change with intention to reach 8K, so I'll assume it will go to 8K although I only trained on 4K sequences.
-
- In order to use the 8K context, you will need to apply the monkeypatch I have added in this repo -- without it, it will not work. The patch is very simple, and you can make the changes yourself:
-
- Increase the max_position_embeddings to 8192 to stretch the sinusoidal
- Stretch the frequency steps by a scale of 0.25
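For reference, the patch those last two removed lines describe boils down to building the RoPE sin/cos tables over 8192 positions with the position ids multiplied by 0.25. Below is a minimal sketch of that idea only, not kaiokendev's actual monkeypatch file; the head dimension of 128 and the helper name are illustrative:

```python
# Illustrative sketch of the SuperHOT-style RoPE stretch described above
# (plain PyTorch; HEAD_DIM = 128 is LLaMA-13B's per-head size: 40 heads x 128 = 5120).
import torch

SCALE = 0.25      # stretch factor from the model card
MAX_POS = 8192    # stretched max_position_embeddings
HEAD_DIM = 128

def scaled_rope_tables(head_dim=HEAD_DIM, max_pos=MAX_POS, scale=SCALE, base=10000.0):
    # standard rotary frequencies...
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # ...but every position id is multiplied by 0.25, which stretches the sinusoid
    # so that 8192 positions span the range the base model was trained on
    t = torch.arange(max_pos).float() * scale
    freqs = torch.outer(t, inv_freq)
    emb = torch.cat((freqs, freqs), dim=-1)
    return emb.cos(), emb.sin()

cos, sin = scaled_rope_tables()
print(cos.shape)  # torch.Size([8192, 128])
```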
 
+ This is the GPTQ 4-bit quantization of this model: https://huggingface.co/Peeepy/Airoboros-13b-SuperHOT-8kb

+ The quantization was made with this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton

+ I used the triton branch with all the available GPTQ options enabled (true_sequential + act_order + groupsize 32):

+ CUDA_VISIBLE_DEVICES=0 python llama.py ./Airoboros-13b-SuperHOT-8k-TRITON-32g-ts-ao c4 --wbits 4 --true-sequential --act-order --groupsize 32 --save_safetensors Airoboros-13b-SuperHOT-8k-TRITON-32g-ts-ao.safetensors
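To sanity-check the resulting checkpoint, you can list the quantized tensors it contains. This is only an illustrative snippet: it assumes the `safetensors` and `torch` Python packages and the qweight/qzeros/scales buffer names that GPTQ-for-LLaMa's QuantLinear layers save.

```python
# Quick check of the saved 4-bit checkpoint: list the packed GPTQ tensors.
from safetensors import safe_open

PATH = "Airoboros-13b-SuperHOT-8k-TRITON-32g-ts-ao.safetensors"  # file produced by the command above

with safe_open(PATH, framework="pt") as f:
    for name in f.keys():
        # quantized layers store packed weights, zero points and per-group scales
        if name.endswith(("qweight", "qzeros", "scales")):
            print(name, tuple(f.get_tensor(name).shape))
```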