- **HuggingFace** (access ELM Turbo Models in HF): 🤗 [here](https://huggingface.co/collections/slicexai/llama31-elm-turbo-66a81aa5f6bcb0b775ba5dd7)
## ELM Turbo Model Release (version for sliced Llama 3.1)

In this version, we applied our new, improved decomposable ELM techniques to a widely used open-source LLM, `meta-llama/Meta-Llama-3.1-8B-Instruct` (8B params; check the [Llama license](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE) for usage terms). After training, we generated three smaller slices with parameter counts ranging from 3B to 6B.

- [Section 1.](https://huggingface.co/slicexai/Llama3.1-elm-turbo-4B-instruct#1-run-elm-turbo-models-with-huggingface-transformers-library) 🤗 instructions to run ELM-Turbo with the Hugging Face Transformers library.
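
The Section 1 link above walks through running an ELM Turbo slice with the Hugging Face Transformers library. As a rough sketch of what that looks like (the model id is taken from the link above; the prompt and generation settings here are illustrative assumptions, not the official snippet):

```python
def build_chat(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format expected by
    instruction-tuned Llama 3.1 checkpoints."""
    return [{"role": "user", "content": user_prompt}]


if __name__ == "__main__":
    # Heavy dependencies are imported here so the helper above stays
    # importable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "slicexai/Llama3.1-elm-turbo-4B-instruct"  # the 4B slice
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = build_chat("Summarize what model slicing is in one sentence.")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Decode only the newly generated tokens, not the echoed prompt.
    outputs = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:],
                           skip_special_tokens=True))
```

The other slices in the collection can be swapped in by changing `model_id`; the loading code is the same for each.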