## Installation Instructions

As a prerequisite, make sure you have [ducttape](https://github.com/CoderPat/ducttape) and [(mini)conda](https://docs.conda.io/en/latest/miniconda.html) installed.

First, clone this repository.
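
For example (the URL and directory name below are placeholders for this repository's actual location):

```bash
git clone <repository_url>
cd <repository_name>
```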

Then, to create a new conda environment with all the necessary dependencies, run the following command:

```bash
export CONDA_HOME="/path/to/(mini)conda3"
bash setup/conda.sh
```
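
Once the environment has been created, activate it before running any of the scripts below (the environment name is an assumption; use whichever name `setup/conda.sh` creates):

```bash
# Hypothetical activation step; replace <env_name> with the environment created by setup/conda.sh.
conda activate <env_name>
```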

# Training

## Data format

Before training, you must preprocess the training data. The raw data should be a `json` file with one JSON object per line, in the following format:
```json
{"text": "<instance_0_text>"}
{"text": "<instance_1_text>"}
```
Note that the preprocessing script packs instances together into sequences of a specified length, separating each instance (JSON line) with the tokenizer's EOS token.
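
For instance, if your raw corpus is a plain-text file with one instance per line, you could produce this format with `jq` (the file names here are placeholders, not files provided by this repository):

```bash
# Wrap each raw line of the corpus in a {"text": ...} object, one JSON object per line.
jq -Rc '{text: .}' raw_corpus.txt > dataset.json
```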

Then, run the bash scripts in this order:

```bash
./preprocess_data.sh [OPTIONS]
./convert2megatron.sh [OPTIONS]
./model_sharding.sh [OPTIONS]
./continue_pretraining.sh [OPTIONS]
```
>NOTE: each of these scripts accepts a `--help` flag that explains how to use each argument.
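
For instance, to list the options accepted by the preprocessing step:

```bash
./preprocess_data.sh --help
```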

For example, to run continued pretraining of Llama 2 7B on datasets `d1` and `d2` with 8 GPUs, run the following:

```bash
./preprocess_data.sh --dataset_json=<path_to_d1> --dataset_bin=<d1_output_path> --vocab_file=<path_to_hf_model>/tokenizer.model --repo=<path_to_repo>
./preprocess_data.sh --dataset_json=<path_to_d2> --dataset_bin=<d2_output_path> --vocab_file=<path_to_hf_model>/tokenizer.model --repo=<path_to_repo>
./convert2megatron.sh --megatron_model=<megatron_model_path> --model_path=<path_to_hf_model> --size=7 --repo=<path_to_repo>
./model_sharding.sh --megatron_model=<megatron_model_path> --sharded_model=<sharded_model_path> --tp=8 --pp=1 --vocab_size=32000 --repo=<path_to_repo>
./continue_pretraining.sh --data_path="1 <d1_output_path> 1 <d2_output_path>" --megatron_model=<sharded_model_path> --model_dir=<checkpoint_save_dir> --tokenizer_path=<path_to_hf_model>/tokenizer.model --tp=8 --pp=1 [TRAINING_ARGS]
```
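>NOTE: the `--data_path` argument lists datasets as alternating weight/path pairs (here both datasets are weighted equally with weight 1), mirroring Megatron-LM's weighted data-mixing format; the paths should point to the preprocessed outputs produced by `preprocess_data.sh`.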