Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

NarutoDolphin-10B - GGUF
- Model creator: https://huggingface.co/FelixChao/
- Original model: https://huggingface.co/FelixChao/NarutoDolphin-10B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [NarutoDolphin-10B.Q2_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q2_K.gguf) | Q2_K | 3.73GB |
| [NarutoDolphin-10B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [NarutoDolphin-10B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [NarutoDolphin-10B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [NarutoDolphin-10B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [NarutoDolphin-10B.Q3_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q3_K.gguf) | Q3_K | 4.84GB |
| [NarutoDolphin-10B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [NarutoDolphin-10B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [NarutoDolphin-10B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [NarutoDolphin-10B.Q4_0.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q4_0.gguf) | Q4_0 | 5.66GB |
| [NarutoDolphin-10B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [NarutoDolphin-10B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [NarutoDolphin-10B.Q4_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q4_K.gguf) | Q4_K | 6.02GB |
| [NarutoDolphin-10B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [NarutoDolphin-10B.Q4_1.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q4_1.gguf) | Q4_1 | 6.27GB |
| [NarutoDolphin-10B.Q5_0.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q5_0.gguf) | Q5_0 | 6.89GB |
| [NarutoDolphin-10B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [NarutoDolphin-10B.Q5_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q5_K.gguf) | Q5_K | 7.08GB |
| [NarutoDolphin-10B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [NarutoDolphin-10B.Q5_1.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q5_1.gguf) | Q5_1 | 7.51GB |
| [NarutoDolphin-10B.Q6_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q6_K.gguf) | Q6_K | 8.2GB |
| [NarutoDolphin-10B.Q8_0.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q8_0.gguf) | Q8_0 | 10.62GB |

Original model description:
---
license: apache-2.0
tags:
- merge
- FelixChao/WizardDolphin-7B
- FelixChao/NinjaDolphin-7B
---

# NarutoDolphin-10B

NarutoDolphin-10B is a merge of the following models:
* [FelixChao/WizardDolphin-7B](https://huggingface.co/FelixChao/WizardDolphin-7B)
* [FelixChao/NinjaDolphin-7B](https://huggingface.co/FelixChao/NinjaDolphin-7B)
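A 10B model built from two 7B models is typically a passthrough (layer-stacking) merge. The card does not publish the actual recipe, so the sketch below is only a hypothetical illustration using [mergekit](https://github.com/arcee-ai/mergekit); the layer ranges and output path are invented for the example and are not the real NarutoDolphin-10B configuration.

```python
# pip install mergekit
# Hypothetical passthrough merge: stacks layer ranges from the two donor
# models. The slice indices below are illustrative guesses, NOT the
# published NarutoDolphin-10B recipe.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG = """
slices:
  - sources:
      - model: FelixChao/WizardDolphin-7B
        layer_range: [0, 24]
  - sources:
      - model: FelixChao/NinjaDolphin-7B
        layer_range: [8, 32]
merge_method: passthrough
dtype: float16
"""

config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG))
run_merge(
    config,
    out_path="./my-passthrough-merge",           # hypothetical output path
    options=MergeOptions(copy_tokenizer=True),   # copy tokenizer files too
)
```

Stacking 24 + 24 layers from two 32-layer 7B bases yields roughly 10-11B parameters, which is consistent with the model's name.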
# Quantized version

A quantized version of this model is available thanks to [s3nh](https://huggingface.co/s3nh).

##### GGUF
- [s3nh/NarutoDolphin-10B-GGUF](https://huggingface.co/s3nh/NarutoDolphin-10B-GGUF)

## 💻 Usage

```python
# Install dependencies (notebook-style; drop the "!" in a shell).
!pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "FelixChao/NarutoDolphin-10B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages into a prompt using the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline in half precision, placing layers
# across available devices automatically.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
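To run one of the quantized files from the table above instead of the full-precision original, any llama.cpp-compatible runtime works. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (tooling chosen for this example, not mentioned in the card); the Q4_K_M file is an arbitrary pick, and any filename from the table works the same way.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed in the table above.
model_path = hf_hub_download(
    repo_id="RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf",
    filename="NarutoDolphin-10B.Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx sets the context window, and
# n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=2048, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants in the table trade quality for memory; mid-range files such as Q4_K_M or Q5_K_M are common default choices.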