---
license: apache-2.0
---

# SliceX AI™ ELM (Efficient Language Models)

**ELM** (which stands for **E**fficient **L**anguage **M**odels) is the first version in a series of cutting-edge language models from [SliceX AI](https://slicex.ai), designed to achieve best-in-class performance in terms of _quality_, _throughput_, and _memory_.

<div align="center">
<img src="elm-rambutan.png" width="256"/>
</div>

ELM is designed to be a modular and customizable family of neural networks that are highly efficient and performant. Today we are sharing the first version in this series: the **ELM-v0.1** models.

_Model:_ ELM introduces a new type of _(de)composable LLM architecture_, along with the algorithmic optimizations required to learn (training) and run (inference) these models. At a high level, we train a single ELM model in a self-supervised manner (during the pre-training phase), but once trained, the ELM model can be sliced in many ways to fit different user/task needs. The optimizations can be applied during the pre-training and/or fine-tuning stage.

_Fast Inference with Customization:_ Once trained, the ELM model architecture permits flexible inference strategies at runtime depending on deployment needs. For instance, the ELM model can be _decomposed_ into smaller slices, i.e., smaller (or larger) models can be extracted from the original model to create multiple inference endpoints. Alternatively, the original (single) ELM model can be loaded _as is_ for inference, and different slices within the model can be queried directly to power faster inference. This gives users an additional level of flexibility to make compute/memory tradeoffs depending on their application and runtime needs.
16 |
+
|
17 |
+
## ELM-v0.1 Model Release
|
18 |
+
Models are located in the `models` folder. ELM models in this repository comes in two sizes (elm-1.0 and elm-0.75) and supports the following use-case.
|
19 |
- news_summarization
|
20 |

## Setup ELM

### Download ELM repo
```bash
sudo apt-get install git-lfs
git lfs install
git clone [email protected]:slicexai/elm-v0.1
```

### Installation
```bash
cd elm-v0.1
pip install -r requirements.txt
```

(Optional) To install git-lfs without sudo:
```bash
wget https://github.com/git-lfs/git-lfs/releases/download/v3.2.0/git-lfs-linux-amd64-v3.2.0.tar.gz
```
42 |
|
43 |
|
|
|
|
|
|
|
|
|
|
|
|
|
44 |
## How to use - Run ELM on a sample task
|
45 |
```bash
|
46 |
python run.py <elm-model-directory>
|