The current Pygmalion-13b has been trained as a LoRA, then merged down to the base model for distribution.

It has also been quantized down to 8-bit using the GPTQ library available here: https://github.com/0cc4m/GPTQ-for-LLaMa

```
python llama.py .\TehVenom_Metharme-13b-Merged c4 --wbits 8 --act-order --save_safetensors Metharme-13b-GPTQ-8bit.act-order.safetensors
```