Update README.md
README.md CHANGED
@@ -1,14 +1,14 @@
----
-base_model: GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-test
-license: apache-2.0
-tags:
-- mlx
----
+---
+base_model: GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-test
+license: apache-2.0
+tags:
+- mlx
+---
 
 # GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-mlx
 
 This quantized low-bit model [GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-mlx](https://huggingface.co/GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-mlx) was converted to MLX format from [`GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-test`](https://huggingface.co/GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-test) using gbx-lm version **0.3.4**.
 
-Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0
+Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0) for more details on the model.
 
 ## Use with mlx
 
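The hunk ends at the `## Use with mlx` heading, so the usage snippet itself is not part of this diff. For context, a minimal sketch of running the converted model with gbx-lm is shown below; it assumes the package is installed (`pip install gbx-lm`) and that `gbx_lm` exposes `load`/`generate` helpers mirroring mlx-lm's API.

```python
# Minimal sketch (assumption: gbx_lm mirrors mlx-lm's load/generate interface).
from gbx_lm import load, generate

# Download the quantized MLX weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-mlx")

# Run a short completion to verify the model loads and generates.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```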