Devarui379 committed
Commit c98704a
1 Parent(s): 6d29b94
Update README.md
README.md
CHANGED
@@ -1,5 +1,6 @@
 ---
-base_model:
+base_model:
+- meta-llama/Llama-3.2-1B-Instruct
 library_name: transformers
 license: llama3.2
 tags:
@@ -7,12 +8,18 @@ tags:
 - uncensored
 - llama-cpp
 - gguf-my-repo
+- llama
+- text-generation-inference
+language:
+- en
 ---
 
 # Devarui379/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF
 This model was converted to GGUF format
 Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) for more details on the model.
 
+You can use it with LM Studio or other mentioned methods below.
+
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
 
@@ -51,4 +58,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo Devarui379/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf -c 2048
-```
+```
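
The "Use with llama.cpp" section of the card describes a brew install followed by inference through the llama.cpp binaries. A minimal sketch of that flow, assuming the Homebrew `llama.cpp` formula and reusing the `--hf-repo`/`--hf-file` flags that appear in the diff (the `llama-cli` invocation and the prompt are illustrative, not from the commit):

```
# Install llama.cpp via Homebrew (works on macOS and Linux)
brew install llama.cpp

# One-off CLI inference, pulling the GGUF file straight from the Hub
# (flags mirror the llama-server command shown in the diff; prompt is an example)
llama-cli --hf-repo Devarui379/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF \
  --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf \
  -p "Why is the sky blue?"

# Or start the HTTP server with a 2048-token context, as in the README
llama-server --hf-repo Devarui379/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF \
  --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf \
  -c 2048
```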
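
Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A hedged example of querying it with curl, assuming the server's default 127.0.0.1:8080 bind and its `/v1/chat/completions` route (neither is stated in the diff):

```
# Query the local llama-server chat endpoint
# (host, port, and max_tokens are assumed defaults, not from the README)
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Write one sentence about GGUF."}
        ],
        "max_tokens": 128
      }'
```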