Update README.md
README.md
@@ -7,7 +7,7 @@ tags:

# GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-mlx

- This quantized low-bit model [GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-mlx](https://huggingface.co/GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-mlx) was converted to MLX format from [`GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-test`](https://huggingface.co/GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0
+ This quantized low-bit model [GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-mlx](https://huggingface.co/GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-mlx) was converted to MLX format from [`GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-test`](https://huggingface.co/GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0) using gbx-lm version **0.3.4**.

Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0) for more details on the model.

## Use with mlx
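The body of the `## Use with mlx` section is not part of this hunk. As a minimal sketch only: GreenBitAI's MLX conversions are normally loaded through the gbx-lm package (a derivative of mlx-lm), and the example below assumes gbx-lm exposes `load`/`generate` helpers analogous to mlx-lm; the prompt text is purely illustrative.

```python
# Requires: pip install gbx-lm  (MLX runs on Apple Silicon)
# Sketch assuming gbx_lm mirrors the mlx_lm load/generate API.
from gbx_lm import load, generate

# Download the quantized low-bit weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("GreenBitAI/Llama-3.1-Nemotron-70B-Instruct-layer-mix-bpw-4.0-mlx")

# Illustrative prompt; verbose=True streams the completion to stdout.
response = generate(model, tokenizer, prompt="What is MLX?", verbose=True)
```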