Novaciano committed
Commit 0ffd739 · verified · 1 parent: 0a72282

Update README.md

Files changed (1)
  1. README.md +10 -44
README.md CHANGED
```diff
@@ -12,6 +12,12 @@ language:
 pipeline_tag: text-generation
 tags:
 - triangulum_1b
+- 1b
+- NSFW
+- 4-bit
+- Uncensored
+- RP
+- Roleplay
 - sft
 - chain_of_thought
 - ollama
@@ -20,7 +26,7 @@ tags:
 - reasoning
 - CoT
 - llama-cpp
-- gguf-my-repo
+- not-for-all-audiences
 library_name: transformers
 metrics:
 - code_eval
@@ -28,48 +34,8 @@ metrics:
 - competition_math
 - character
 base_model: prithivMLmods/Triangulum-1B
+datasets:
+- Chaser-cz/DPO_Pairs-Roleplay-NSFW
 ---
 
-# Novaciano/Triangulum-1B-Q4_K_M-GGUF
-This model was converted to GGUF format from [`prithivMLmods/Triangulum-1B`](https://huggingface.co/prithivMLmods/Triangulum-1B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/prithivMLmods/Triangulum-1B) for more details on the model.
-
-## Use with llama.cpp
-Install llama.cpp through brew (works on Mac and Linux)
-
-```bash
-brew install llama.cpp
-
-```
-Invoke the llama.cpp server or the CLI.
-
-### CLI:
-```bash
-llama-cli --hf-repo Novaciano/Triangulum-1B-Q4_K_M-GGUF --hf-file triangulum-1b-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
-```
-
-### Server:
-```bash
-llama-server --hf-repo Novaciano/Triangulum-1B-Q4_K_M-GGUF --hf-file triangulum-1b-q4_k_m-imat.gguf -c 2048
-```
-
-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
-
-Step 1: Clone llama.cpp from GitHub.
-```
-git clone https://github.com/ggerganov/llama.cpp
-```
-
-Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
-```
-cd llama.cpp && LLAMA_CURL=1 make
-```
-
-Step 3: Run inference through the main binary.
-```
-./llama-cli --hf-repo Novaciano/Triangulum-1B-Q4_K_M-GGUF --hf-file triangulum-1b-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
-```
-or
-```
-./llama-server --hf-repo Novaciano/Triangulum-1B-Q4_K_M-GGUF --hf-file triangulum-1b-q4_k_m-imat.gguf -c 2048
-```
+# TRIANGULUM 1B DPO ROLEPLAY NSFW
```
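The llama-server invocation in the removed card text still applies to the published GGUF file. A minimal sketch of starting and querying that server follows; the repo and file names come from the diff above, while port 8080 (llama-server's default) and the OpenAI-compatible `/v1/chat/completions` endpoint are assumptions about the reader's llama.cpp build, not part of this commit:

```bash
# Start the server as in the removed card text
# (repo id and GGUF filename are taken from the diff above).
llama-server --hf-repo Novaciano/Triangulum-1B-Q4_K_M-GGUF \
  --hf-file triangulum-1b-q4_k_m-imat.gguf -c 2048 &

# Give the model time to load, then send a chat request.
# Port 8080 is llama-server's default; adjust if you pass --port.
sleep 10
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Introduce yourself in one sentence."}],
        "max_tokens": 64
      }'
```

Because llama-server's response follows the OpenAI chat-completions schema, any OpenAI-compatible client can be pointed at the same endpoint instead of raw curl.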