Cxxs committed
Commit 6f248db · 1 Parent(s): 5d6bb55

Update README.md

Files changed (1)
  1. README.md +10 -172
README.md CHANGED
@@ -2,183 +2,21 @@
  license: mit
  ---
  # 🔥 MoE-Mixtral-7B-8Expert
- <p align="center">
- <img src="logo.png" width="90%"/>
- <br>
- </p>
- <p align="left">
- LLaMA2-Accessory link: <a href="https://github.com/Alpha-VLLM/LLaMA2-Accessory" target="_blank">Github</a>
- </p>
 
- [mixtral-8x7b](https://huggingface.co/someone13574/mixtral-8x7b-32kseqlen) is a Mixture-of-Experts (MoE) model. This
- tutorial introduces how to run inference with and finetune the model.
-
- ## Features
  With LLaMA2-Accessory, mixtral-8x7b enjoys the following features:
  1. Distributed MoE (namely instantiating experts on multiple processes/GPUs)
  2. Load Balancing Loss
  3. Tensor Parallel and FSDP for efficient training
-
  4. Distributed and/or quantized inference
 
- ## Install
- Please follow the [instructions here](https://llama2-accessory.readthedocs.io/en/latest/install.html) to install
- LLaMA2-Accessory, which is an easy-to-use and comprehensive toolkit for LLM development.
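- For orientation, a rough sketch of a typical from-source setup is shown below; the linked instructions are
- authoritative (they also cover environment creation and optional dependencies), and the `requirements.txt`
- path is an assumption about the repository layout.
- ```bash
- # Hypothetical quick-start; follow the official install guide for the exact steps.
- git clone https://github.com/Alpha-VLLM/LLaMA2-Accessory.git
- cd LLaMA2-Accessory
- pip install -r requirements.txt   # assumed requirements file; see the install docs
- ```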
-
- ## Prepare Checkpoint
- Given the official mixtral-8x7b checkpoints, a step of format conversion is needed to make them usable by
- LLaMA2-Accessory. We have released the off-the-shelf converted checkpoints. Alternatively, you can convert them
- by yourself according to the following guides.
- ### A. Download Converted Checkpoints
- The converted checkpoints are released at [HuggingFace](https://huggingface.co/Alpha-VLLM/MoE-Mixtral-7B-8Expert/tree/main/converted);
- please download all files in the folder to your machine.
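- A minimal command-line sketch of this download, assuming `huggingface_hub` (which provides the
- `huggingface-cli` tool) is installed and that your version supports the `--include` and `--local-dir`
- options; the target directory name below is only an example:
- ```bash
- # Fetch only the `converted` folder of the repo
- huggingface-cli download Alpha-VLLM/MoE-Mixtral-7B-8Expert \
-     --include "converted/*" \
-     --local-dir ./MoE-Mixtral-7B-8Expert
- ```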
- ### B. Convert by Yourself
-
- #### 1. Prepare the original checkpoints
- The original checkpoints are available at https://huggingface.co/someone13574/mixtral-8x7b-32kseqlen. Please first
- download the 10 splits and then concatenate them into one file following the official guide. After this step, you should have the
- `consolidated.00.pth` file.
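- For reference, a sketch of the concatenation step; the split file names below are an assumption based on
- the common `consolidated.00.pth-split*` naming, so check the repository's own instructions for the exact names:
- ```bash
- # Concatenate the downloaded splits (assumed naming) into a single checkpoint file
- cat consolidated.00.pth-split* > consolidated.00.pth
- ```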
-
- #### 2. Convert
-
- Download the [split.py](https://huggingface.co/Alpha-VLLM/MoE-Mixtral-7B-8Expert/blob/main/converted/split.py) script and *put it in the same directory as `consolidated.00.pth`*. Run the following
- command to conduct the conversion:
- ```bash
- python split.py
- ```
- After running, you should see a folder named `converted` created, with eight `consolidated.**-of-08.model.pth` files
- therein.
-
- #### 3. Prepare other resources
- Finally, please download the following three files from [our HuggingFace repo](https://huggingface.co/Alpha-VLLM/MoE-Mixtral-7B-8Expert/tree/main/converted):
- ```
- config.json
- meta.json
- tokenizer.model
- ```
- and put them under the `converted` directory, next to the weight files you obtained in the previous step.
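- As a quick sanity check, the `converted` directory should now look roughly as follows; the exact weight-file
- indices are inferred from the `consolidated.**-of-08.model.pth` pattern above and may differ:
- ```bash
- ls converted
- # consolidated.00-of-08.model.pth ... consolidated.07-of-08.model.pth
- # config.json  meta.json  tokenizer.model
- ```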
-
-
- ## Inference
- ### Simple Inference
- You can run inference on 8, 4, 2, or 1 GPUs. With tensor parallelism and distributed MoE, the more GPUs you use, the
- lower the memory and computation load on each individual GPU. The following code exemplifies the inference process.
- ```python
- from accessory.model.meta import MetaModel
-
- import random
- import numpy as np
-
- import torch
- import torch.distributed as dist
- import multiprocessing as mp
-
- def main(world_size, rank) -> None:
-     # specify a fixed random seed to ensure consistent token sampling among model parallel ranks
-     random.seed(0)
-     torch.random.manual_seed(0)
-     np.random.seed(0)
-
-     dist.init_process_group(
-         backend="nccl", rank=rank, world_size=world_size,
-         init_method="tcp://127.0.0.1:23560",
-     )
-     torch.cuda.set_device(rank)
-
-     # mp_group identifies which ranks will work collaboratively through model parallelism
-     model = MetaModel.from_pretrained("/path/to/converted", max_seq_len=2048,
-                                       mp_group=dist.new_group(ranks=list(range(dist.get_world_size()))))
-
-     prompt = "The best programming language in the world is"
-
-     response = model.generate([prompt], images=None, max_gen_len=512)[0]
-     print(response)
-     # or, if you want to generate the response token by token:
-     response = None
-     for response_in_progress in model.stream_generate(prompt, image=None, max_gen_len=512):
-         response = response_in_progress['text']
-     if rank == 0:  # without this filter, the response will be printed `world_size` times
-         print(response)
-
-
- if __name__ == "__main__":
-     N_GPU = 8  # 1, 2, 4, or 8
-     if N_GPU == 1:
-         main(world_size=1, rank=0)
-     elif N_GPU > 1:
-         # You can use whatever method (e.g. torchrun, slurm, etc.) for distributed launch;
-         # just be sure to initialize torch distributed (by invoking dist.init_process_group)
-         # before creating the model if model parallel size > 1 is used
-         mp.set_start_method("spawn")
-         for rank in range(N_GPU):
-             process = mp.Process(target=main, args=(N_GPU, rank))
-             process.start()
-     else:
-         raise ValueError
- ```
-
- A thorough tutorial on inference with LLaMA2-Accessory can be found in the
- [document](https://llama2-accessory-temp.readthedocs.io/en/latest/inference.html).
-
- ### Host Local Demo
- LLaMA2-Accessory provides a series of Gradio demos for efficient interaction with your model. To host a local demo
- for the pretrained mixtral-8x7b model, follow the steps below:
- ```bash
- cd LLaMA2-Accessory/accessory
- torchrun --nproc-per-node=$N_GPUS_TO_USE --master-port=$PORT demos/single_turn.py \
-     --pretrained_path $PATH_TO_CONVERTED
- ```
- As mentioned in the [Simple Inference](#simple-inference) section, `$N_GPUS_TO_USE` can be 1, 2, 4, or 8.
- `$PATH_TO_CONVERTED` should be the directory containing the converted checkpoints, and `$PORT` can be any free port.
-
-
- ## Finetuning
- LLaMA2-Accessory supports both full-parameter and parameter-efficient finetuning of mixtral-8x7b. It also
- supports the load balancing regularization loss. More advanced MoE support will come soon.
-
- ### Data
- We use the following datasets to exemplify finetuning:
- + [evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)
- + [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
-
- The two datasets are referred to by the [dialog_ultrachat200kWizardcode.yaml](https://github.com/Alpha-VLLM/LLaMA2-Accessory/accessory/configs/data/finetune/sg/dialog_ultrachat200kWizardcode.yaml)
- file, which is then used by the `*.sh` experiments shown below to define the data for finetuning. Note that the data need
- to be processed to match the format usable by LLaMA2-Accessory. For convenience, we provide the processed data files for
- [💾evol-codealpaca-v1](https://huggingface.co/Alpha-VLLM/LLaMA2-Accessory/data/evol-codealpaca-v1/wizardCode.json) and
- [💾ultrachat_200k](https://huggingface.co/Alpha-VLLM/LLaMA2-Accessory/data/ultrachat_200k_train_sft.json).
- Please move them to the positions specified by `dialog_ultrachat200kWizardcode.yaml`.
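- For example, the processed files can be fetched with `huggingface-cli`; the assumption here is that they
- live under a `data/` folder of the `Alpha-VLLM/LLaMA2-Accessory` Hugging Face repo, as the links above
- suggest, so adjust the include pattern and destination if the layout differs:
- ```bash
- # Download the processed finetuning data (assumed repo layout), then move the JSON files
- # to the paths referenced in dialog_ultrachat200kWizardcode.yaml
- huggingface-cli download Alpha-VLLM/LLaMA2-Accessory \
-     --include "data/*" \
-     --local-dir ./accessory-data
- ```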
-
-
- ### Full Finetune
- ```bash
- cd LLaMA2-Accessory/accessory
- srun -n32 --gres=gpu:8 --ntasks-per-node=8 bash \
-     exps/finetune/sg/dialog_ultrachat200kWizardcode_mistral.sh \
-     /path/to/converted/mixtral-8x7b-32kseqlen \
-     /path/to/converted/mixtral-8x7b-32kseqlen/config.json \
-     /path/to/converted/mixtral-8x7b-32kseqlen/tokenizer.model
- ```
- ### PEFT
- ```bash
- cd LLaMA2-Accessory/accessory
- srun -n16 --gres=gpu:8 --ntasks-per-node=8 bash \
-     exps/finetune/sg/dialog_ultrachat200kWizardcode_mistralPeft.sh \
-     /path/to/converted/mixtral-8x7b-32kseqlen \
-     /path/to/converted/mixtral-8x7b-32kseqlen/config.json \
-     /path/to/converted/mixtral-8x7b-32kseqlen/tokenizer.model
- ```
-
- **Finetuned Model Release:**
-
- + [🤗checkpoint](https://huggingface.co/Alpha-VLLM/MoE-Mixtral-7B-8Expert/tree/main/finetuned/peft)
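- As with the converted base checkpoints, the finetuned weights can be fetched from the command line; this is
- a sketch assuming `huggingface-cli download` with the `--include`/`--local-dir` options is available:
- ```bash
- # Fetch only the finetuned PEFT checkpoint folder
- huggingface-cli download Alpha-VLLM/MoE-Mixtral-7B-8Expert \
-     --include "finetuned/peft/*" \
-     --local-dir ./MoE-Mixtral-7B-8Expert
- ```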
-
- **Host Local Demo**
- ```bash
- cd LLaMA2-Accessory/accessory
- python demos/multi_turn.py --n_gpus $N_GPUS_TO_USE --pretrained_path $PATH_TO_FINETUNED
- ```
-
- See the LLaMA2-Accessory [document](https://llama2-accessory.readthedocs.io/en/latest/) to learn more about
- [finetuning](https://llama2-accessory.readthedocs.io/en/latest/finetune/index.html)
- and [inference](https://llama2-accessory-temp.readthedocs.io/en/latest/inference.html).
 
  license: mit
  ---
  # 🔥 MoE-Mixtral-7B-8Expert
+ [mixtral-8x7b](https://huggingface.co/someone13574/mixtral-8x7b-32kseqlen) is a Mixture-of-Experts (MoE) model.
+ [LLaMA2-Accessory](https://github.com/Alpha-VLLM/LLaMA2-Accessory) supports its inference and finetuning.
 
+ ## 🚀 Features
  With LLaMA2-Accessory, mixtral-8x7b enjoys the following features:
  1. Distributed MoE (namely instantiating experts on multiple processes/GPUs)
  2. Load Balancing Loss
  3. Tensor Parallel and FSDP for efficient training
  4. Distributed and/or quantized inference
 
+ ## 🔥 Online Demo
+ We host a web demo at <https://dfc02190724c71dd5b.gradio.live/>, which shows a mixtral-8x7b model finetuned on
+ [evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) and
+ [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k), with LoRA and Bias tuning.
+ Please note that this is a temporary link; we will update it with our official permanent link today.
 
+ ## 💡 Tutorial
+ A detailed tutorial is available at <https://llama2-accessory.readthedocs.io/en/latest/projects/mixtral-8x7b.html#>