Quantization made by Richard Erkhov.

[GitHub](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

# NarutoDolphin-10B - GGUF

- Model creator: https://huggingface.co/FelixChao/
- Original model: https://huggingface.co/FelixChao/NarutoDolphin-10B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [NarutoDolphin-10B.Q2_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q2_K.gguf) | Q2_K | 3.73GB |
| [NarutoDolphin-10B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [NarutoDolphin-10B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [NarutoDolphin-10B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [NarutoDolphin-10B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [NarutoDolphin-10B.Q3_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q3_K.gguf) | Q3_K | 4.84GB |
| [NarutoDolphin-10B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [NarutoDolphin-10B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [NarutoDolphin-10B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [NarutoDolphin-10B.Q4_0.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q4_0.gguf) | Q4_0 | 5.66GB |
| [NarutoDolphin-10B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [NarutoDolphin-10B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [NarutoDolphin-10B.Q4_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q4_K.gguf) | Q4_K | 6.02GB |
| [NarutoDolphin-10B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [NarutoDolphin-10B.Q4_1.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q4_1.gguf) | Q4_1 | 6.27GB |
| [NarutoDolphin-10B.Q5_0.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q5_0.gguf) | Q5_0 | 6.89GB |
| [NarutoDolphin-10B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [NarutoDolphin-10B.Q5_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q5_K.gguf) | Q5_K | 7.08GB |
| [NarutoDolphin-10B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [NarutoDolphin-10B.Q5_1.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q5_1.gguf) | Q5_1 | 7.51GB |
| [NarutoDolphin-10B.Q6_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q6_K.gguf) | Q6_K | 8.2GB |
| [NarutoDolphin-10B.Q8_0.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf/blob/main/NarutoDolphin-10B.Q8_0.gguf) | Q8_0 | 10.62GB |

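This card does not include GGUF usage instructions, so here is a minimal, hedged sketch of one common route: downloading a file from the table above with `huggingface_hub` and loading it with the `llama-cpp-python` bindings. The choice of Q4_K_M and the context/offload settings are illustrative assumptions, not recommendations from the original card.

```python
# pip install huggingface_hub llama-cpp-python

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quantized files listed above; any filename from the
# table works the same way (Q4_K_M is picked here as a common middle ground).
model_path = hf_hub_download(
    repo_id="RichardErkhov/FelixChao_-_NarutoDolphin-10B-gguf",
    filename="NarutoDolphin-10B.Q4_K_M.gguf",
)

# n_ctx and n_gpu_layers are assumptions; tune them for your hardware
# (n_gpu_layers=-1 offloads all layers to the GPU if one is available).
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

As a rule of thumb, the lower quants (Q2_K, Q3_*) trade quality for size, while Q6_K and Q8_0 stay closest to the original weights at the cost of larger files.
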
Original model description:
---
license: apache-2.0
tags:
- merge
- FelixChao/WizardDolphin-7B
- FelixChao/NinjaDolphin-7B
---

# NarutoDolphin-10B

NarutoDolphin-10B is a merge of the following models:
* [FelixChao/WizardDolphin-7B](https://huggingface.co/FelixChao/WizardDolphin-7B)
* [FelixChao/NinjaDolphin-7B](https://huggingface.co/FelixChao/NinjaDolphin-7B)

# Quantized version

A quantized version of this model is available thanks to [s3nh](https://huggingface.co/s3nh).

##### GGUF

- [s3nh/NarutoDolphin-10B-GGUF](https://huggingface.co/s3nh/NarutoDolphin-10B-GGUF)

## 💻 Usage

```python
# Install dependencies (notebook syntax; drop the leading "!" in a shell).
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "FelixChao/NarutoDolphin-10B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16, spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
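
The pipeline above loads the original full-precision checkpoint, which in float16 needs on the order of 20 GB of GPU memory for a ~10B-parameter model. As a hedged alternative (not part of the original card; it assumes a CUDA GPU with the `bitsandbytes` package installed), the same checkpoint can be loaded in 4-bit:

```python
# pip install -qU transformers accelerate bitsandbytes

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit quantized loading via bitsandbytes; compute still runs in float16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "FelixChao/NarutoDolphin-10B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Same chat-template flow as above, but driving generate() directly.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is a large language model?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```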