casperhansen committed on
Commit 8ef5699
1 Parent(s): d81efc2

Delete README copy.md

Files changed (1)
  1. README copy.md +0 -207
README copy.md DELETED
@@ -1,207 +0,0 @@
---
license: cc-by-nc-sa-4.0
datasets:
- camel-ai/code
- ehartford/wizard_vicuna_70k_unfiltered
- anon8231489123/ShareGPT_Vicuna_unfiltered
- teknium1/GPTeacher/roleplay-instruct-v2-final
- teknium1/GPTeacher/codegen-instruct
- timdettmers/openassistant-guanaco
- camel-ai/math
- project-baize/baize-chatbot/medical_chat_data
- project-baize/baize-chatbot/quora_chat_data
- project-baize/baize-chatbot/stackoverflow_chat_data
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/ai_society
- jondurbin/airoboros-gpt4-1.2
- LongConversations
- camel-ai/physics
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---

# MPT-7B-Chat-8k

MPT-7B-Chat-8k is a chatbot-like model for dialogue generation.
It was built by finetuning [MPT-7B-8k](https://huggingface.co/mosaicml/mpt-7b-8k) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), and [Baize](https://github.com/project-baize/baize-chatbot) datasets, along with some generated datasets.
This is the same dataset mix that [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat) was trained on.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)

This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.

## Model Date

July 18, 2023

## Model License

_CC-By-NC-SA-4.0_ (non-commercial use only)

## Documentation

* [Blog post: MPT-7B-8k](https://www.mosaicml.com/blog/long-context-mpt-7b-8k)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!

## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.

```python
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-chat-8k',
    trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and `bfloat16` precision:

```python
import torch
import transformers

name = 'mosaicml/mpt-7b-chat-8k'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'  # use triton-based FlashAttention
config.init_device = 'cuda:0'  # for fast initialization directly on GPU

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,  # load model weights in bfloat16
    trust_remote_code=True
)
```

The model was initially trained with a sequence length of 2048, followed by an additional pretraining stage that adapted it to sequence lengths of up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:

```python
import transformers

name = 'mosaicml/mpt-7b-chat-8k'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384  # (input + output) tokens can now be up to 16384

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    trust_remote_code=True
)
```

This model was trained with the MPT-7B-chat tokenizer, which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional ChatML tokens.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-7b-chat-8k')
```
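
Because the tokenizer includes ChatML tokens, chat prompts are typically wrapped in ChatML delimiters before generation. Below is a minimal sketch assuming the standard `<|im_start|>`/`<|im_end|>` ChatML format; the helper function is illustrative, not part of the model's API:

```python
# Illustrative ChatML-style prompt formatting. The delimiters follow the
# standard ChatML convention; this helper is not part of the model's API.
def format_chatml(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_chatml(
    system="You are a helpful assistant.",
    user="Write a haiku about long context windows.",
)
```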

The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).

```python
import torch
from transformers import pipeline

with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings (see the sketch after the table below)
* It does not use biases

| Hyperparameter | Value |
|-----------------|-------|
| n_parameters    | 6.7B  |
| n_layers        | 32    |
| n_heads         | 32    |
| d_model         | 4096  |
| vocab size      | 50432 |
| sequence length | 2048  |
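
To make the ALiBi modification concrete, here is a minimal sketch of how linear attention biases are computed, following the ALiBi paper; the helper names are illustrative and not taken from llm-foundry:

```python
import torch

def alibi_slopes(n_heads: int) -> torch.Tensor:
    # Per-head slopes form the geometric sequence 2^(-8/n), 2^(-16/n), ...
    # from the ALiBi paper (assuming n_heads is a power of two, as here).
    start = 2 ** (-8 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Element [h, i, j] equals slope_h * (j - i): zero on the diagonal and
    # increasingly negative the further key j lies in query i's past.
    relative_position = torch.arange(seq_len)[None, :] - torch.arange(seq_len)[:, None]
    return alibi_slopes(n_heads)[:, None, None] * relative_position[None, :, :]

# The bias is added to the attention logits before the causal mask and softmax,
# e.g. scores = q @ k.transpose(-2, -1) / d_head**0.5 + alibi_bias(n_heads, seq_len)
```

Because the bias depends only on relative distance rather than absolute position, ALiBi extrapolates to sequence lengths beyond those seen in training, which is what the `max_seq_len = 16384` example above relies on.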

## Data Mix

The model was trained on the following data mix:

| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| Airoboros/GPT4-1.2 | 26.4M | 1.71% |
| Baize | 55.0M | 3.57% |
| Camel | 301M | 19.54% |
| GPTeacher | 7.56M | 0.49% |
| Guanaco | 15.6M | 1.02% |
| LongConversations | 18.4M | 1.19% |
| ShareGPT | 821M | 53.24% |
| WizardLM | 297M | 19.23% |

"LongConversations" is a GPT3.5/4-generated dataset, details of which will be released at a later date.

### Training Configuration

This model was trained on 192 H100s for about 48 minutes using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
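
For readers curious what sharded data parallel finetuning looks like in code, here is a minimal sketch using PyTorch FSDP with AdamW. It is illustrative only, not the actual training code (the real run used llm-foundry on the MosaicML Platform); `model` and `dataloader` are assumed to already exist:

```python
# Minimal sharded data parallel finetuning sketch with PyTorch FSDP + AdamW.
# Illustrative only; `model` and `dataloader` are assumed to exist.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group('nccl')  # one process per GPU
model = FSDP(model.cuda())       # shard parameters and gradients across ranks
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for batch in dataloader:
    loss = model(**batch).loss   # causal-LM loss from the forward pass
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```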

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-7B-Chat-8k can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Chat-8k was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## Acknowledgements

This model was finetuned by the MosaicML NLP team.

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://www.mosaicml.com/get-started?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b-8k).

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author  = {MosaicML NLP Team},
    title   = {Introducing MPT-30B: Raising the bar for open-source foundation models},
    year    = {2023},
    url     = {www.mosaicml.com/blog/mpt-30b},
    note    = {Accessed: 2023-06-22},
    urldate = {2023-06-22}
}
```