Updating readme
README.md CHANGED
@@ -18,21 +18,6 @@ QuantLM 4-bit: FloatLM LLMs Quantized to 4-bits.
 QuantLM 3-bit: FloatLM LLMs Quantized to 3-bits.
 All models are released in unpacked (FP16 format) - compatible with FP16 GEMMs across any library supporting the LLaMa architecture.
 
-## Usage:
-
-```python
-import transformers as tf, torch
-
-# Please select the model you wish to run.
-model_name = "SpectraSuite/TriLM_3.9B_Unpacked"
-
-# Please adjust the temperature, repetition penalty, top_k, top_p and other sampling parameters according to your needs.
-pipeline = tf.pipeline("text-generation", model=model_name, model_kwargs={"torch_dtype": torch.float16}, device_map="auto")
-
-# These are base (pretrained) LLMs that are not instruction- or chat-tuned. You may need to adjust your prompt accordingly.
-pipeline("Once upon a time")
-```
-
 ## Citation
 If you find these models or the associated paper useful, please cite the paper:
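Since this commit drops the usage snippet from the README, here is a minimal sketch of loading one of the unpacked checkpoints directly. It assumes only the standard `transformers` auto classes and the FP16 release format described above; the model name is taken from the removed snippet, and the sampling parameters are illustrative placeholders, not recommended settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model name taken from the removed snippet; any other SpectraSuite
# unpacked checkpoint should load the same way (assumption).
model_name = "SpectraSuite/TriLM_3.9B_Unpacked"

# Unpacked releases are plain FP16 weights in the LLaMa layout, so the
# standard auto classes load them without custom modeling code.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # keep the released FP16 precision
    device_map="auto",
)

# Base (pretrained) model: prompt with plain text, no chat template.
inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)

# Illustrative sampling settings; adjust temperature, repetition penalty,
# top_k, top_p, etc. to your needs, as the removed snippet advised.
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```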