TheBloke committed
Commit a0c033a
Parent: 0b0e1aa

Upload README.md

Files changed (1)
  1. README.md +45 -3
README.md CHANGED
 
@@ -1,10 +1,15 @@
  ---
+ base_model: https://huggingface.co/AIDC-ai-business/Marcoroni-7b
+ datasets:
+ - Open-Orca/OpenOrca
  inference: false
+ language:
+ - en
  license: cc-by-nc-4.0
  model_creator: AIDC-ai-business
- model_link: https://huggingface.co/AIDC-ai-business/Marcoroni-7b
  model_name: Marcoroni 7b
  model_type: llama
+ pipeline_tag: text-generation
  quantized_by: TheBloke
  ---
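The YAML front matter added above is what the Hub reads as model-card metadata. As a minimal sketch (the repo id is a hypothetical placeholder and the `huggingface_hub` usage is an assumption, not something this commit documents), the new fields can be inspected programmatically:

```python
# Minimal sketch: read the model-card front matter with huggingface_hub.
# The repo id below is a hypothetical placeholder for the quantized repo.
from huggingface_hub import ModelCard

card = ModelCard.load("TheBloke/Marcoroni-7b-GGUF")
meta = card.data.to_dict()  # the YAML front matter as a plain dict
for key in ("base_model", "datasets", "language", "license", "pipeline_tag"):
    print(f"{key}: {meta.get(key)}")
```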
 
 
@@ -62,11 +67,16 @@ Here is an incomplete list of clients and libraries that are known to support GGUF
  <!-- repositories-available end -->

  <!-- prompt-template start -->
- ## Prompt template: Unknown
+ ## Prompt template: Alpaca

  ```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
  {prompt}

+ ### Response:
+
  ```

  <!-- prompt-template end -->
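To make the template concrete, here is a minimal Python sketch that wraps a user instruction in the Alpaca format added above (the `build_prompt` helper is made up for illustration):

```python
# Minimal sketch: fill the Alpaca template with a user instruction.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Return the full Alpaca-formatted prompt for one instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Summarise what GGUF is in one sentence."))
```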
 
@@ -131,7 +141,7 @@ Refer to the Provided Files table below to see what files use which methods, and how.
  Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

  ```shell
- ./main -ngl 32 -m marcoroni-7b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
+ ./main -ngl 32 -m marcoroni-7b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
  ```

  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
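For use from Python rather than the `./main` CLI, a rough equivalent with the third-party `llama-cpp-python` package might look like the sketch below. The file name and sampling settings mirror the command above; the package itself is an assumption, not something this commit documents.

```python
# Minimal sketch: run the GGUF file with llama-cpp-python instead of ./main.
# Settings mirror the CLI example: 4096 context, 32 GPU layers, temp 0.7.
from llama_cpp import Llama

llm = Llama(
    model_path="marcoroni-7b.q4_K_M.gguf",
    n_ctx=4096,        # -c 4096
    n_gpu_layers=32,   # -ngl 32; set to 0 if you have no GPU acceleration
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three llama.cpp quantisation methods.\n\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```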
 
@@ -222,5 +232,37 @@ And thank you again to a16z for their generous grant.
  <!-- original-model-card start -->
  # Original model card: AIDC-ai-business's Marcoroni 7b

+ # Marcoroni-7B
+ Fine-tuned from Llama2-7B, using Orca-style data and other open-source data.
+
+ # Model Details
+ * **Trained by**: AIDC AI-Business.
+ * **Model type**: **Marcoroni-7B** is an auto-regressive language model based on the Llama 2 transformer architecture.
+ * **Language(s)**: English
+ * **License for Marcoroni-7B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
+
+
+ # Prompting
+
+ ## Prompt Template for Alpaca style
+
+ ```
+ ### Instruction:
+
+ <prompt> (without the <>)
+
+ ### Response:
+ ```
+
+
+ # Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
+
+ | Metric | Value |
+ |-----------------------|-------|
+ | Avg. | 60.1 |
+ | ARC (25-shot) | 58.11 |
+ | HellaSwag (10-shot) | 80.08 |
+ | MMLU (5-shot) | 51.36 |
+ | TruthfulQA (0-shot) | 50.85 |

  <!-- original-model-card end -->