wendys-llc committed · verified · commit 923855a · parent 2b1753f

Update README.md

Files changed (1): README.md (+21 −1)

README.md CHANGED
@@ -2,11 +2,31 @@
 tags:
 - llama-cpp
 - gguf-my-repo
+- text-generation-inference
+datasets:
+- wendys-llc/domestic-receipts
+pipeline_tag: text2text-generation
 ---
 
 # wendys-llc/unsloth-attempt-Q8_0-GGUF
 This model was converted to GGUF format from [`wendys-llc/unsloth-attempt`](https://huggingface.co/wendys-llc/unsloth-attempt) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/wendys-llc/unsloth-attempt) for more details on the model.
+
+## Prompt
+
+```
+Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+
+### Instruction:
+Use the Input below to explain a task or topic
+
+### Input:
+{}
+
+### Response:
+{}
+```
+
 ## Use with llama.cpp
 
 Install llama.cpp through brew.
@@ -32,4 +52,4 @@ Note: You can also use this checkpoint directly through the [usage steps](https:
 
 ```
 git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m unsloth-attempt.Q8_0.gguf -n 128
-```
+```
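The `## Prompt` section added in this commit is an Alpaca-style template with two `{}` slots (input and response). A minimal Python sketch of filling it before sending it to the model — the sample input string is illustrative, not from the model card:

```python
# Alpaca-style template copied from the README diff above.
# The two {} placeholders are filled positionally by str.format:
# the first with the input text, the second (left empty for
# inference) with the response the model is expected to complete.
TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Use the Input below to explain a task or topic

### Input:
{}

### Response:
{}"""

# Hypothetical sample input, for illustration only.
prompt = TEMPLATE.format("GGUF quantization", "")
print(prompt)
```

The response slot is left empty so the model generates the completion after `### Response:`.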