---
license: llama3.1
language:
- en
---

# SliceX AI™ ELM Turbo

**ELM** (which stands for **E**fficient **L**anguage **M**odels) **Turbo** is the next-generation model in the series of cutting-edge language models from [SliceX AI](https://slicex.ai), designed to achieve best-in-class performance in terms of _quality_, _throughput_, and _memory_.

<div align="center">
  <img src="https://raw.githubusercontent.com/slicex-ai/elm-turbo/main/elm-turbo-training.png" width="768"/>
</div>

ELM is designed to be a modular and customizable family of neural networks that are highly efficient and performant. Today we are sharing the second version in this series: the **ELM Turbo** models (named _Starfruit_).

_Model:_ ELM Turbo introduces a more _adaptable_, _decomposable_ LLM architecture, yielding the flexibility to (de)compose an LLM into smaller stand-alone slices. Compared to our previous version, the new architecture allows more powerful model slices to be learned during training (yielding better quality and higher generative capacity) and gives finer-grained control over LLM efficiency: slices can be produced at varying model sizes depending on user/task needs and deployment criteria, i.e., cloud or edge-device constraints.

_Training:_ ELM Turbo introduces algorithmic optimizations that allow us to train a single model; once trained, the ELM Turbo model can be sliced in many ways to fit different user/task needs. We formulate the entire training procedure for ELM Turbo as a _continual learning process_, during which we apply **"slicing"** operations and corresponding optimizations during the pre-training and/or fine-tuning stage. In a nutshell, this procedure _teaches the model to learn & compress its knowledge into smaller slices_.

_Fast Inference with Customization:_ As with our previous version, once trained, the ELM Turbo architecture permits flexible inference strategies at runtime, depending on deployment and device constraints, allowing users to make the optimal compute/memory tradeoff for their application. On top of the blazing fast speeds achieved by native ELM Turbo slice optimization, we also layered in NVIDIA's TensorRT-LLM integration for further speedups. The end result 👉 optimized ELM Turbo models that deliver some of the best LLM inference performance available.

- **Blog:** [Medium](https://medium.com/sujith-ravi/introducing-elm-turbo-next-generation-efficient-decomposable-llms-a2347bd08676)

- **Github:** https://github.com/slicex-ai/elm-turbo

- **HuggingFace** (access ELM Turbo Models in HF): 👉 [here](https://huggingface.co/collections/slicexai/llama31-elm-turbo-66a81aa5f6bcb0b775ba5dd7)

## ELM Turbo Model Release (version for sliced Llama 3.1)

In this version, we applied our new, improved decomposable ELM techniques to a widely used open-source LLM, `meta-llama/Meta-Llama-3.1-8B-Instruct` (8B parameters; check the [Llama license](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE) for usage). After training, we generated three smaller slices with parameter counts ranging from 3B to 6B.

**NOTE**: The open-source datasets from the HuggingFace hub used for instruction fine-tuning ELM Turbo include, but are not limited to: `allenai/tulu-v2-sft-mixture`, `microsoft/orca-math-word-problems-200k`, `mlabonne/WizardLM_evol_instruct_70k-ShareGPT`, and `mlabonne/WizardLM_evol_instruct_v2_196K-ShareGPT`. We advise users to exercise caution when utilizing ELM Turbo, as these datasets may contain factually incorrect information, unintended biases, inappropriate content, and other potential issues. We recommend thoroughly evaluating the model's outputs and implementing appropriate safeguards for your specific use case.

## 1. Running ELM Turbo via Nvidia's TensorRT-LLM

- **[Cloud AI]** If you are using A100 or H100 GPUs, you can utilize our pre-built ELM Turbo-TRTLLM engines. Instructions to install and run them are below.

  - Additionally, you can build your own TRTLLM engines by following the instructions provided in [Section (c)](https://github.com/slicex-ai/elm-turbo/blob/main/README.md#c-optional-create--run-your-own-elm-turbo-trtllm-engines-from-elm-turbo-huggingfacehf-checkpoints) below.

- **[Edge AI]** To run on edge (Windows RTX), follow the instructions provided by Nvidia in their TRT-LLM documentation: [Windows README](https://github.com/NVIDIA/TensorRT-LLM/blob/main/windows/README.md).

### (a) Download & install Nvidia's TensorRT-LLM with Docker.

The following commands create a Docker container named `elm_trtllm` and install TensorRT-LLM. If you encounter any installation errors related to TensorRT-LLM, refer to the troubleshooting section [here](https://nvidia.github.io/TensorRT-LLM/reference/troubleshooting.html).
```bash
git clone https://github.com/slicex-ai/elm-turbo.git
cd elm-turbo
sh setup_trtllm.sh
```

### (b) Run pre-built ELM Turbo-trtllm engines with your input prompts.

For example, to run our pre-built engine for `slicexai/Llama3.1-elm-turbo-6B-instruct` on A100 and H100 GPUs, respectively:
```bash
docker attach elm_trtllm
cd /lm
sh run_llama_elm_turbo_trtllm_engine.sh slicexai/Llama3.1-elm-turbo-6B-instruct A100 "plan a fun day with my grandparents."
sh run_llama_elm_turbo_trtllm_engine.sh slicexai/Llama3.1-elm-turbo-6B-instruct H100 "plan a fun day with my grandparents."
```

Detailed usage:
```
Usage: sh run_llama_elm_turbo_trtllm_engine.sh <elm_turbo_model_id> <gpu_type> "<input_prompt>"
Supported elm_turbo_model_id choices: [slicexai/Llama3.1-elm-turbo-6B-instruct, slicexai/Llama3.1-elm-turbo-4B-instruct, slicexai/Llama3.1-elm-turbo-3B-instruct]
Supported gpu_type choices: [A100, H100]
```

### (c) (Optional) Create & run your own ELM Turbo-trtllm engines from ELM Turbo Huggingface (HF) checkpoints.

#### Compile the Model into a TensorRT-LLM Engine
To build a TensorRT-LLM engine for `slicexai/Llama3.1-elm-turbo-6B-instruct` with INT8 weight-only quantization, follow the instructions below. For more detailed configurations, refer to the Llama conversion instructions provided by NVIDIA [here](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/llama).

```bash
docker attach elm_trtllm
cd /lm/TensorRT-LLM/examples/llama
huggingface-cli download slicexai/Llama3.1-elm-turbo-6B-instruct --local-dir ../slicexai/Llama3.1-elm-turbo-6B-instruct
python3 convert_checkpoint.py --dtype bfloat16 --use_weight_only --weight_only_precision int8 --model_dir ../slicexai/Llama3.1-elm-turbo-6B-instruct --output_dir ../slicexai/Llama3.1-elm-turbo-6B-instruct-trtllm-ckpt
trtllm-build --gpt_attention_plugin bfloat16 --gemm_plugin bfloat16 --max_seq_len 4096 --max_batch_size 256 --checkpoint_dir ../slicexai/Llama3.1-elm-turbo-6B-instruct-trtllm-ckpt --output_dir ../slicexai/Llama3.1-elm-turbo-6B-instruct-trtllm-engine
```

#### Run the Model
Now that you’ve got your model engine, it's time to run it.

```bash
python3 ../run.py \
  --engine_dir ../slicexai/Llama3.1-elm-turbo-6B-instruct-trtllm-engine \
  --max_output_len 512 \
  --presence_penalty 0.7 \
  --frequency_penalty 0.7 \
  --tokenizer_dir ../slicexai/Llama3.1-elm-turbo-6B-instruct \
  --input_text """<|begin_of_text|><|start_header_id|>user<|end_header_id|>

plan a fun day with my grandparents.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
```
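
The `--input_text` value above wraps the user message in the Llama 3.1 chat template, ending with the assistant header so the model generates the reply. A small helper to assemble that prompt for a single user turn, using exactly the special tokens shown above:

```python
# Build the Llama 3.1 single-turn chat prompt used with --input_text above.
def llama31_user_prompt(message: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama31_user_prompt("plan a fun day with my grandparents."))
```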