## Downloading pretrained weights
Unless you are training from scratch, you will need pretrained weights: either the original ones from Meta or an openly licensed alternative such as OpenLLaMA (described below).
### Original Meta weights
Download the model weights following the instructions on the official [LLaMA repository](https://github.com/facebookresearch/llama).
Once downloaded, you should have a folder like this:
```text
checkpoints/llama
├── 7B
│   ├── ...
│   └── consolidated.00.pth
├── 13B
│   ...
└── tokenizer.model
```
Convert the weights to the Lit-LLaMA format:
```bash
python scripts/convert_checkpoint.py --model_size 7B
```
> **Note**
> All scripts support argument [customization](customize_paths.md)
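If you want to sanity-check the result before moving on, the converted file should load as a plain PyTorch checkpoint. A minimal sketch, assuming the default output path `checkpoints/lit-llama/7B/lit-llama.pth` (the layout shown later in this guide) and that the file holds a flat state dict of tensors:
```python
# Sanity-check sketch: load the converted checkpoint and count parameters.
# The path and the flat-state-dict assumption are taken from this guide,
# not guaranteed by the conversion script itself.
import torch

state_dict = torch.load("checkpoints/lit-llama/7B/lit-llama.pth", map_location="cpu")
total = sum(t.numel() for t in state_dict.values())
print(f"{len(state_dict)} tensors, ~{total / 1e9:.1f}B parameters")
```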
### OpenLLaMA
OpenLM Research has released **Apache 2.0 licensed** weights for a reproduction of LLaMA trained on the 1.2 trillion token open-source [RedPajama](https://github.com/togethercomputer/RedPajama-Data) dataset.
Preview weights trained on intermediate numbers of tokens (200B and 300B at the time of writing) have been released. To get them, run:
```bash
# Make sure you have git-lfs installed (https://git-lfs.com): git lfs install
git clone https://huggingface.co/openlm-research/open_llama_7b_preview_300bt checkpoints/open-llama/7B
```
Or if you don't have `git-lfs` installed:
```bash
python scripts/download.py --repo_id openlm-research/open_llama_7b_preview_300bt --local_dir checkpoints/open-llama/7B
```
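The helper script fetches a full snapshot of the Hugging Face repository. If you prefer to script the download yourself, something like the following should be roughly equivalent (an assumption; `scripts/download.py` may differ in details, and `local_dir` requires a reasonably recent `huggingface_hub`):
```python
# Download every file of the repo into the target directory.
# Sketch only: assumes a recent huggingface_hub release.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="openlm-research/open_llama_7b_preview_300bt",
    local_dir="checkpoints/open-llama/7B",
)
```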
Once downloaded, you should have a folder like this:
```text
checkpoints/open-llama/
└── 7B
    └── open_llama_7b_preview_300bt_transformers_weights
        ├── ...
        ├── pytorch_model-00001-of-00002.bin
        ├── pytorch_model-00002-of-00002.bin
        ├── pytorch_model.bin.index.json
        └── tokenizer.model
```
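Downloads of sharded checkpoints occasionally end up incomplete. The `pytorch_model.bin.index.json` file maps every tensor to the shard that stores it, so you can check that all referenced shards made it to disk. A short sketch, with the directory path taken from the tree above:
```python
# Verify that every shard referenced by the index exists on disk.
import json
from pathlib import Path

ckpt_dir = Path("checkpoints/open-llama/7B/open_llama_7b_preview_300bt_transformers_weights")
index = json.loads((ckpt_dir / "pytorch_model.bin.index.json").read_text())
shards = sorted(set(index["weight_map"].values()))
missing = [s for s in shards if not (ckpt_dir / s).exists()]
print("all shards present" if not missing else f"missing shards: {missing}")
```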
Convert the weights to the Lit-LLaMA format:
```bash
python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/open-llama/7B/open_llama_7b_preview_300bt_transformers_weights --model_size 7B
```
> **Note**
> All scripts support argument [customization](customize_paths.md)
Once converted, you should have a folder like this:
```text
checkpoints/lit-llama/
├── 7B
│   └── lit-llama.pth
└── tokenizer.model
```
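`tokenizer.model` is a SentencePiece model, so you can give it a quick round-trip test independently of the weights. A minimal sketch (assumes `pip install sentencepiece`):
```python
# Smoke test: encode and decode a sentence with the downloaded tokenizer.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="checkpoints/lit-llama/tokenizer.model")
ids = sp.encode("Hello, my name is")
print(ids)             # token ids
print(sp.decode(ids))  # should reproduce the original text
```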
You are all set. Now you can continue with inference or finetuning.
Try running [`generate.py` to test the imported weights](inference.md).
### Alternative sources
You might find LLaMA weights hosted on the Hugging Face Hub. Beware that this infringes the original weights' license.
You could try downloading them by running the following command with a specific repo ID:
```bash
# Make sure you have git-lfs installed (https://git-lfs.com): git lfs install
git clone REPO_ID checkpoints/hf-llama/7B
```
Or if you don't have `git-lfs` installed:
```bash
python scripts/download.py --repo_id REPO_ID --local_dir checkpoints/hf-llama/7B
```
Once downloaded, you should have a folder like this:
```text
checkpoints/hf-llama/
└── 7B
    ├── ...
    ├── pytorch_model-00001-of-00002.bin
    ├── pytorch_model-00002-of-00002.bin
    ├── pytorch_model.bin.index.json
    └── tokenizer.model
```
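The shard index also usually records the expected total size of the weights, which is a cheap way to spot a truncated download. A small sketch, assuming the index follows the common Hugging Face format with a `metadata.total_size` field (in bytes):
```python
# Report the total checkpoint size recorded in the shard index.
import json
from pathlib import Path

ckpt_dir = Path("checkpoints/hf-llama/7B")
index = json.loads((ckpt_dir / "pytorch_model.bin.index.json").read_text())
print(f'index expects {index["metadata"]["total_size"] / 1e9:.1f} GB of weights')
```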
Convert the weights to the Lit-LLaMA format:
```bash
python scripts/convert_hf_checkpoint.py --model_size 7B
```
> **Note**
> All scripts support argument [customization](customize_paths.md)
Once converted, you should have a folder like this:
```text
checkpoints/lit-llama/
├── 7B
│   └── lit-llama.pth
└── tokenizer.model
```
You are all set. Now you can continue with inference or finetuning.
Try running [`generate.py` to test the imported weights](inference.md).