---
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- llama-cpp
- gguf-my-repo
base_model: Weyaxi/Einstein-v5-v0.2-7B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
model-index:
- name: Einstein-v5-v0.2-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 60.92
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 80.99
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 61.02
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.59
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.69
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 59.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
---

# AIronMind/Einstein-v5-v0.2-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Weyaxi/Einstein-v5-v0.2-7B`](https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B) for more details on the model.
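
If you want the quantized file itself (for example, to use it with another GGUF-compatible runtime), the `huggingface-cli` tool from `huggingface_hub` can fetch it directly. A sketch, assuming the CLI is installed (`pip install -U "huggingface_hub[cli]"`):

```bash
# Download the quantized GGUF file into the current directory.
huggingface-cli download AIronMind/Einstein-v5-v0.2-7B-Q4_K_M-GGUF \
  einstein-v5-v0.2-7b-q4_k_m.gguf --local-dir .
```

The file is about 4 GB at Q4_K_M, so the download may take a while on slow connections.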

## Use with llama.cpp
Install llama.cpp via [Homebrew](https://brew.sh/) (works on macOS and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.
172
+
173
+ ### CLI:
174
+ ```bash
175
+ llama-cli --hf-repo AIronMind/Einstein-v5-v0.2-7B-Q4_K_M-GGUF --hf-file einstein-v5-v0.2-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
176
+ ```
177
+
178
+ ### Server:
179
+ ```bash
180
+ llama-server --hf-repo AIronMind/Einstein-v5-v0.2-7B-Q4_K_M-GGUF --hf-file einstein-v5-v0.2-7b-q4_k_m.gguf -c 2048
181
+ ```
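
Once `llama-server` is running, it exposes an HTTP completion endpoint (port 8080 by default). A minimal smoke test, assuming the server command above is already running on localhost:

```bash
# Query the local llama-server; `/completion` with `prompt` and
# `n_predict` fields is llama.cpp's native server API.
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```

The server also provides an OpenAI-compatible `/v1/chat/completions` route if you prefer that client format.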

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo AIronMind/Einstein-v5-v0.2-7B-Q4_K_M-GGUF --hf-file einstein-v5-v0.2-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo AIronMind/Einstein-v5-v0.2-7B-Q4_K_M-GGUF --hf-file einstein-v5-v0.2-7b-q4_k_m.gguf -c 2048
```