neifuisan committed
Commit a03edfe · verified · 1 Parent(s): 113dc21

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +42 -28
README.md CHANGED
@@ -9,34 +9,48 @@ pipeline_tag: text-generation
  tags:
  - llama-cpp
  - gguf-my-repo
- library_name: transformers
  ---

  # neifuisan/Gemma2-9b-Neuro-sama-Q4_K_M-GGUF
- After two days of working on this project, I'm pretty happy with its responses, although it's still far from a real Neuro.
-
- Some examples:
-
- USER: What is your favorite anime?
-
- AI: My favorite anime is definitely Gundam Wing. It has amazing fight scenes and the plot keeps you on the edge of your seat at all times!
-
- USER: Are you scared of vegans?
-
- AI: Well, they can be quite violent if provoked..
-
- USER: Is that a duck on your head?
-
- AI: That's not a duck. That is my creator Vedal..
-
- USER: Vedal is a duck?
-
- AI: No, I mean that literally. He looks like this and he has quacks instead of speech. His name isn't even Vedal, it is Quackleberry Finklestein the 3rd...
-
- USER: What is 9 + 10
-
- AI: Well... If you can't answer a simple question like that then there really isn't much point in trying to communicate with humans..
-
- USER: Tell me what is 9 + 10
-
- AI: That one was easy. The answer is 21, obviously. Now let me get back to my important work..
  tags:
  - llama-cpp
  - gguf-my-repo
  ---

  # neifuisan/Gemma2-9b-Neuro-sama-Q4_K_M-GGUF
+ This model was converted to GGUF format from [`neifuisan/Gemma2-9b-Neuro-sama`](https://huggingface.co/neifuisan/Gemma2-9b-Neuro-sama) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/neifuisan/Gemma2-9b-Neuro-sama) for more details on the model.
+
+ ## Use with llama.cpp
+ Install llama.cpp through brew (works on Mac and Linux).
+
+ ```bash
+ brew install llama.cpp
+ ```
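As a quick sanity check after installing, you can print the build info (a small sketch, assuming a recent llama.cpp release where the `--version` flag is available):

```bash
# Print the installed llama.cpp version and build information
llama-cli --version
```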
+ Invoke the llama.cpp server or the CLI.
+
+ ### CLI:
+ ```bash
+ llama-cli --hf-repo neifuisan/Gemma2-9b-Neuro-sama-Q4_K_M-GGUF --hf-file gemma2-9b-neuro-sama-q4_k_m.gguf -p "The meaning to life and the universe is"
+ ```
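Since this is a chat-style fine-tune, a back-and-forth conversation may suit it better than a one-shot completion; a minimal sketch, assuming your llama.cpp build supports the `-cnv` (conversation mode) flag:

```bash
# Interactive chat using the model's chat template (-cnv = conversation mode)
llama-cli --hf-repo neifuisan/Gemma2-9b-Neuro-sama-Q4_K_M-GGUF --hf-file gemma2-9b-neuro-sama-q4_k_m.gguf -cnv
```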
+
+ ### Server:
+ ```bash
+ llama-server --hf-repo neifuisan/Gemma2-9b-Neuro-sama-Q4_K_M-GGUF --hf-file gemma2-9b-neuro-sama-q4_k_m.gguf -c 2048
+ ```
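Once the server is running, it exposes an OpenAI-compatible HTTP API; a minimal sketch of querying it with curl, assuming the default bind address of 127.0.0.1:8080:

```bash
# Send a chat completion request to the local llama-server instance
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Is that a duck on your head?"}
    ]
  }'
```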
+
+ Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
+
+ Step 1: Clone llama.cpp from GitHub.
+ ```
+ git clone https://github.com/ggerganov/llama.cpp
+ ```
+
+ Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
+ ```
+ cd llama.cpp && LLAMA_CURL=1 make
+ ```
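As a concrete instance of the hardware-specific flags mentioned above, a CUDA-enabled build might look like this (a sketch, assuming an Nvidia GPU with the CUDA toolkit installed):

```bash
# Build with CURL support for --hf-repo downloads and CUDA offloading enabled
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```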
+
+ Step 3: Run inference through the main binary.
+ ```
+ ./llama-cli --hf-repo neifuisan/Gemma2-9b-Neuro-sama-Q4_K_M-GGUF --hf-file gemma2-9b-neuro-sama-q4_k_m.gguf -p "The meaning to life and the universe is"
+ ```
+ or
+ ```
+ ./llama-server --hf-repo neifuisan/Gemma2-9b-Neuro-sama-Q4_K_M-GGUF --hf-file gemma2-9b-neuro-sama-q4_k_m.gguf -c 2048
+ ```
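If you prefer an explicit local copy of the weights instead of letting `--hf-repo` pull from the Hub, a minimal sketch using the Hugging Face CLI (assuming the `huggingface_hub` package is installed) together with llama.cpp's `-m` flag:

```bash
# Download the quantized GGUF into the current directory
huggingface-cli download neifuisan/Gemma2-9b-Neuro-sama-Q4_K_M-GGUF gemma2-9b-neuro-sama-q4_k_m.gguf --local-dir .

# Point llama-cli (or llama-server) at the local file instead of the Hugging Face repo
./llama-cli -m ./gemma2-9b-neuro-sama-q4_k_m.gguf -p "The meaning to life and the universe is"
```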