Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

10
+ TenyxChat-8x7B-v1 - GGUF
11
+ - Model creator: https://huggingface.co/tenyx/
12
+ - Original model: https://huggingface.co/tenyx/TenyxChat-8x7B-v1/
13
+
14
+
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TenyxChat-8x7B-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q2_K.gguf) | Q2_K | 16.12GB |
| [TenyxChat-8x7B-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [TenyxChat-8x7B-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [TenyxChat-8x7B-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [TenyxChat-8x7B-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.IQ3_M.gguf) | IQ3_M | 11.99GB |
| [TenyxChat-8x7B-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q3_K.gguf) | Q3_K | 21.0GB |
| [TenyxChat-8x7B-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [TenyxChat-8x7B-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [TenyxChat-8x7B-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [TenyxChat-8x7B-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q4_0.gguf) | Q4_0 | 24.63GB |
| [TenyxChat-8x7B-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [TenyxChat-8x7B-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [TenyxChat-8x7B-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q4_K.gguf) | Q4_K | 26.49GB |
| [TenyxChat-8x7B-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [TenyxChat-8x7B-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q4_1.gguf) | Q4_1 | 27.32GB |
| [TenyxChat-8x7B-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q5_0.gguf) | Q5_0 | 30.02GB |
| [TenyxChat-8x7B-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [TenyxChat-8x7B-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q5_K.gguf) | Q5_K | 30.95GB |
| [TenyxChat-8x7B-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [TenyxChat-8x7B-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q5_1.gguf) | Q5_1 | 32.71GB |
| [TenyxChat-8x7B-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/blob/main/TenyxChat-8x7B-v1.Q6_K.gguf) | Q6_K | 35.74GB |
| [TenyxChat-8x7B-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/tenyx_-_TenyxChat-8x7B-v1-gguf/tree/main/) | Q8_0 | 46.22GB |

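The file sizes above scale roughly with the effective bits per weight of each quantization. As a rough sanity check, total parameters × bits-per-weight ÷ 8 approximates the file size. The sketch below assumes Mixtral 8x7B's widely cited ~46.7B total parameters and approximate bits-per-weight figures (k-quants mix tensor types, so these are not exact llama.cpp numbers), and reports GiB, which appears to be the unit used in the table.

```python
# Rough rule of thumb for GGUF file sizes: total parameters times the
# effective bits per weight, divided by 8 bits/byte. Parameter count and
# bits-per-weight values are approximations, not exact llama.cpp numbers.
PARAMS = 46.7e9  # Mixtral 8x7B total parameters (approx.)

# Approximate effective bits per weight for a few quant types (assumed).
bits_per_weight = {"Q2_K": 2.6, "Q4_0": 4.55, "Q8_0": 8.5}

def approx_size_gib(quant: str) -> float:
    """Estimated file size in GiB for the given quant type."""
    return PARAMS * bits_per_weight[quant] / 8 / 2**30

for q in bits_per_weight:
    print(f"{q}: ~{approx_size_gib(q):.1f} GiB")
```

For example, Q8_0 at ~8.5 bits/weight estimates to ~46.2 GiB, matching the table; smaller k-quants deviate more because their effective bits per weight varies by tensor.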
Original model description:
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning

Introducing TenyxChat-8x7B-v1, part of our TenyxChat series of models trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)), similar to that used for our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores. Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning without altering the pre-trained output distribution. TenyxChat-8x7B-v1 was trained on eight A100 (80GB) GPUs for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).

# Model details

- Model type: Fine-tuned Mixture-of-Experts 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)

## Usage

Our model uses a simple chat template based on that of Mixtral-8x7B-Instruct-v0.1. The chat template and a Hugging Face generation example are shown below.

### Chat Template (Jinja)

```jinja
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
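To make the template's effect concrete, here is a minimal stdlib sketch of the prompt string it produces. The BOS/EOS strings and the exact whitespace are assumptions (Mixtral uses `<s>`/`</s>`); in practice the authoritative string comes from `tokenizer.apply_chat_template`.

```python
# Minimal sketch of the prompt the chat template above renders.
# BOS/EOS strings and newline placement are assumptions, not the
# official tokenizer output.
BOS, EOS = "<s>", "</s>"

def render_chat(messages):
    segments = []
    for m in messages:
        if m["role"] in ("user", "system"):
            # system and user turns are both wrapped in [INST]...[/INST]
            segments.append(f"[INST]{m['content']}[/INST]")
        elif m["role"] == "assistant":
            # assistant turns are closed with the EOS token
            segments.append(m["content"] + EOS)
    return BOS + "\n".join(segments)

print(render_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi. I would like to make a hotel booking."},
]))
```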

### Hugging Face Example

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
    {"role": "user", "content": "Hi. I would like to make a hotel booking."},
]

prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
print(outputs[0]["generated_text"])
```

### Output

```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```

# Performance

At the time of release (Jan 2024), TenyxChat-8x7B-v1 is the highest-ranked model available for download and commercial use on the MT-Bench evaluation, surpassed only by GPT-4.

## MT-Bench

MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated by GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.

| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |

\*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

![hexplot.png](assets/hexplot.png)
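As a quick arithmetic check, each Average entry in the table is the mean of the two turn scores:

```python
# Verify that the Average column equals the mean of the two turn scores
# (values copied from the MT-Bench table above).
scores = {
    "GPT-4*": (8.95625, 9.02500, 8.990625),
    "TenyxChat-8x7B-v1": (8.63750, 8.16250, 8.400000),
    "Mixtral (reproduced)": (8.49375, 8.00000, 8.246875),
    "GPT-3.5-turbo*": (8.07500, 7.81250, 7.943750),
}
for model, (first, second, avg) in scores.items():
    assert abs((first + second) / 2 - avg) < 1e-9, model
```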

# Limitations

TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We have not fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observations, the model still tends to struggle with tasks that involve reasoning and math. In some instances, it might generate verbose or extraneous content.

# License

TenyxChat-8x7B-v1, like Mixtral-8x7B-Instruct-v0.1, is distributed under the Apache License 2.0.

# Citation

If you use TenyxChat-8x7B-v1 for your research, cite us as

```
@misc{tenyxchat2024,
      title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
      author={Tenyx},
      year={2024},
}
```