|
---
license: apache-2.0
base_model: PygmalionAI/Pygmalion-3-12B
tags:
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
---
|
|
|
# Triangle104/Pygmalion-3-12B-Q5_K_S-GGUF |
|
This model was converted to GGUF format from [`PygmalionAI/Pygmalion-3-12B`](https://huggingface.co/PygmalionAI/Pygmalion-3-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
|
Refer to the [original model card](https://huggingface.co/PygmalionAI/Pygmalion-3-12B) for more details on the model. |
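
If you'd rather reproduce a quantization like this locally instead of going through the space, the usual llama.cpp workflow is sketched below. The script and binary names have changed across llama.cpp versions, so treat the exact paths as assumptions and check your checkout.

```bash
# Sketch only: local GGUF conversion + quantization with llama.cpp.
# Assumes a built llama.cpp checkout (see build steps below) and the
# original model downloaded to ./Pygmalion-3-12B.

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF file.
python convert_hf_to_gguf.py ./Pygmalion-3-12B \
  --outfile pygmalion-3-12b-f16.gguf --outtype f16

# 2. Quantize it down to Q5_K_S.
./llama-quantize pygmalion-3-12b-f16.gguf pygmalion-3-12b-q5_k_s.gguf Q5_K_S
```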
|
|
|
--- |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Dataset
|
|
|
|
|
|
|
We've gathered a large collection of instruction and roleplay data totaling hundreds of millions of tokens, including our PIPPA dataset and data from roleplaying forums.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Limitations and biases
|
|
|
|
|
|
|
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope. |
|
|
|
|
|
As such, it was not fine-tuned to be safe and harmless: the base model and this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Training Specifications
|
|
|
We trained our model as a rank-32 LoRA adapter for one epoch over our data using 8x NVIDIA A40 GPUs. For this run, we employed a learning rate of 2e-4 and a total batch size of 24 across all GPUs. A cosine learning rate scheduler was used with a 100-step warmup, and DeepSpeed ZeRO was used to reduce memory usage.
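
As a concrete illustration of how a run like this is typically launched, here is a hedged Axolotl-style invocation with a DeepSpeed ZeRO config. The config filenames are hypothetical; the actual training configuration has not been published in this card.

```bash
# Illustrative only: launching a LoRA fine-tune with Axolotl + DeepSpeed ZeRO.
# "pygmalion-lora.yml" is a hypothetical config that would hold the values
# from the text above: lora_r: 32, learning_rate: 2e-4, cosine scheduler with
# 100 warmup steps, and an effective batch size of 24 across 8 GPUs.
accelerate launch -m axolotl.cli.train pygmalion-lora.yml \
  --deepspeed deepspeed_configs/zero2.json
```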
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Acknowledgements
|
|
|
|
|
|
|
This project could not have been done without the compute support of Hive Digital Technologies and the Axolotl training software. |
|
|
|
|
|
We'd like to extensively thank lemonilia for their wonderful help in compiling roleplay forum data. |
|
|
|
|
|
And most of all, we dedicate this model to our great community, who've stuck with us through everything until now. Sincerely, thank you so much. We hope you enjoy our work to the fullest and we promise more is on the way soon.
|
|
|
--- |
|
## Use with llama.cpp |
|
Install llama.cpp via Homebrew (works on Mac and Linux):
|
|
|
```bash
brew install llama.cpp
```
|
Invoke the llama.cpp server or the CLI. |
|
|
|
### CLI: |
|
```bash
llama-cli --hf-repo Triangle104/Pygmalion-3-12B-Q5_K_S-GGUF --hf-file pygmalion-3-12b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
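
Since this is a conversational roleplay model, interactive chat mode may suit it better than a one-shot prompt. A minimal sketch follows; `-cnv` enables llama-cli's conversation mode, and the `chatml` template here is an assumption, so confirm the actual prompt format against the original model card.

```bash
# Sketch: interactive chat. The chatml template is an assumption; verify
# the model's real prompt format on the original model card.
llama-cli --hf-repo Triangle104/Pygmalion-3-12B-Q5_K_S-GGUF \
  --hf-file pygmalion-3-12b-q5_k_s.gguf \
  -cnv --chat-template chatml
```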
|
|
|
### Server: |
|
```bash
llama-server --hf-repo Triangle104/Pygmalion-3-12B-Q5_K_S-GGUF --hf-file pygmalion-3-12b-q5_k_s.gguf -c 2048
```
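
Once running, llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default), so you can query it with curl. A minimal sketch, assuming the default host and port:

```bash
# Query the running llama-server via its OpenAI-compatible chat endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a creative roleplay partner."},
          {"role": "user", "content": "Describe the tavern we just walked into."}
        ],
        "temperature": 0.8
      }'
```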
|
|
|
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
|
|
|
Step 1: Clone llama.cpp from GitHub. |
|
```
git clone https://github.com/ggerganov/llama.cpp
```
|
|
|
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
|
```
cd llama.cpp && LLAMA_CURL=1 make
```
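
Note: recent llama.cpp versions have replaced the Makefile with CMake, so if `make` fails on a new checkout, a roughly equivalent CMake build is sketched below (the CUDA flag is only for Nvidia machines).

```
# CMake equivalent of the make invocation above, for newer llama.cpp trees.
cmake -B build -DLLAMA_CURL=ON -DGGML_CUDA=ON
cmake --build build --config Release
```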
|
|
|
Step 3: Run inference through the main binary. |
|
```
./llama-cli --hf-repo Triangle104/Pygmalion-3-12B-Q5_K_S-GGUF --hf-file pygmalion-3-12b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
|
or |
|
```
./llama-server --hf-repo Triangle104/Pygmalion-3-12B-Q5_K_S-GGUF --hf-file pygmalion-3-12b-q5_k_s.gguf -c 2048
```
|
|