Update README.md
fix the link to config
README.md
CHANGED
@@ -58,7 +58,7 @@ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 
 ### Training Procedure
 
-Models are pre-trained on the aforementioned dataset in mixed-precision (FP16), optimized with Adam, and trained using the NeoX tokenizer with a vocabulary size of 50,257. We outline the complete hyperparameters choices in the project's [GitHub repository](https://github.com/Stability-AI/StableLM
+Models are pre-trained on the aforementioned dataset in mixed-precision (FP16), optimized with Adam, and trained using the NeoX tokenizer with a vocabulary size of 50,257. We outline the complete hyperparameters choices in the project's [GitHub repository](https://github.com/Stability-AI/StableLM/blob/main/configs/stablelm-base-alpha-3b.yaml).
 
 ## Use and Limitations
 
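For readers who want to check the corrected link, here is a minimal sketch of pulling the linked config and listing its top-level hyperparameter keys. It assumes the file is also reachable at the equivalent raw.githubusercontent.com path and parses as plain YAML (neither verified here), and it requires the `requests` and `pyyaml` packages:

```python
# Hedged sketch (not part of the commit): fetch the training config that the
# fixed link points to and print its top-level hyperparameter keys.
# Assumption: the blob URL maps to the matching raw.githubusercontent.com path.
import requests
import yaml

CONFIG_URL = (
    "https://raw.githubusercontent.com/Stability-AI/StableLM/"
    "main/configs/stablelm-base-alpha-3b.yaml"
)

response = requests.get(CONFIG_URL, timeout=10)
response.raise_for_status()  # fail loudly if the path has moved

config = yaml.safe_load(response.text)
print(f"{len(config)} top-level settings")
for key in sorted(config):
    print("-", key)
```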