Update README.md
fix the link of config
README.md CHANGED
@@ -56,7 +56,7 @@ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 
 ### Training Procedure
 
-Models are pre-trained on the aforementioned dataset in mixed-precision (FP16), optimized with Adam, and trained using the NeoX tokenizer with a vocabulary size of 50,257. We outline the complete hyperparameters choices in the project's [GitHub repository](https://github.com/Stability-AI/StableLM
+Models are pre-trained on the aforementioned dataset in mixed-precision (FP16), optimized with Adam, and trained using the NeoX tokenizer with a vocabulary size of 50,257. We outline the complete hyperparameters choices in the project's [GitHub repository](https://github.com/Stability-AI/StableLM/blob/main/configs/stablelm-base-alpha-7b.yaml).
 
 ## Use and Limitations
 
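As a rough illustration of the training setup described in the changed line (NeoX-style tokenizer with a 50,257-token vocabulary), here is a minimal sketch of loading the tokenizer with Hugging Face `transformers` and mirroring the `decode` call visible in the hunk header. The checkpoint name `stabilityai/stablelm-base-alpha-7b` is an assumption inferred from the config filename in the fixed link.

```python
from transformers import AutoTokenizer

# Checkpoint name assumed from the config filename in the fixed link
# (stablelm-base-alpha-7b.yaml); adjust if the hub repo differs.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-base-alpha-7b")

# The model card states a NeoX-style tokenizer with a vocabulary size of 50,257.
print(tokenizer.vocab_size)

# Round-trip a short string, mirroring the decode call shown in the hunk header above.
tokens = tokenizer("StableLM is a language model.", return_tensors="pt").input_ids
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```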