Triangle104 committed on
Commit da3a1b6 · verified · 1 Parent(s): a443c6b

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +0 -243
README.md CHANGED
@@ -47,249 +47,6 @@ tags:
  This model was converted to GGUF format from [`utter-project/EuroLLM-9B-Instruct`](https://huggingface.co/utter-project/EuroLLM-9B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-9B-Instruct) for more details on the model.
 
- ---
- Model details:
-
- This is the model card for EuroLLM-9B-Instruct. You can also check the pre-trained version: EuroLLM-9B.
-
- Developed by: Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- Funded by: European Union.
- Model type: A 9B parameter multilingual transformer LLM.
- Language(s) (NLP): Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- License: Apache License 2.0.
-
- ## Model Details
-
- The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages.
- EuroLLM-9B is a 9B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: Web data, parallel data (en-xx and xx-en), and high-quality datasets.
- EuroLLM-9B-Instruct was further instruction tuned on EuroBlocks, an instruction tuning dataset with a focus on general instruction-following and machine translation.
-
- ### Model Description
-
- EuroLLM uses a standard, dense Transformer architecture:
-
- - We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance.
- - We perform pre-layer normalization, since it improves training stability, and use RMSNorm, which is faster.
- - We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- - We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performance while allowing the extension of the context length.
-
- For pre-training, we use 400 Nvidia H100 GPUs of the Marenostrum 5 supercomputer, training the model with a constant batch size of 2,800 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer and BF16 precision.
- Here is a summary of the model hyper-parameters:
-
- | Hyper-parameter | Value |
- |---|---|
- | Sequence Length | 4,096 |
- | Number of Layers | 42 |
- | Embedding Size | 4,096 |
- | FFN Hidden Size | 12,288 |
- | Number of Heads | 32 |
- | Number of KV Heads (GQA) | 8 |
- | Activation Function | SwiGLU |
- | Position Encodings | RoPE (Θ = 10,000) |
- | Layer Norm | RMSNorm |
- | Tied Embeddings | No |
- | Embedding Parameters | 0.524B |
- | LM Head Parameters | 0.524B |
- | Non-embedding Parameters | 8.105B |
- | Total Parameters | 9.154B |
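-
- As a rough cross-check of the table, the parameter counts can be reproduced with a few lines of arithmetic. The sketch below assumes a standard Llama-style parameterization (separate q/k/v/o projections, a SwiGLU gate/up/down MLP, two RMSNorms per layer, untied embeddings) and a vocabulary of about 128k tokens, inferred here from the 0.524B embedding parameters at embedding size 4,096; small deviations from the card's figures come down to rounding and details not listed in the table.
-
- ```python
- # Back-of-the-envelope parameter count for the hyper-parameters above.
- hidden, ffn, layers = 4096, 12288, 42
- heads, kv_heads = 32, 8
- head_dim = hidden // heads
- vocab = 128_000  # assumption: 128_000 * 4096 ≈ 0.524B embedding parameters
-
- attn = 2 * hidden * hidden                 # q_proj and o_proj
- attn += 2 * hidden * kv_heads * head_dim   # k_proj and v_proj (GQA: 8 KV heads)
- mlp = 3 * hidden * ffn                     # gate, up and down projections (SwiGLU)
- norms = 2 * hidden                         # two RMSNorm weight vectors per layer
-
- non_embedding = layers * (attn + mlp + norms) + hidden  # + final RMSNorm
- embedding = vocab * hidden                 # input embedding matrix
- lm_head = vocab * hidden                   # untied output head
-
- print(f"non-embedding: {non_embedding / 1e9:.3f}B")                  # ~8.10B (card: 8.105B)
- print(f"embedding / lm head: {embedding / 1e9:.3f}B each")           # 0.524B
- print(f"total: {(non_embedding + embedding + lm_head) / 1e9:.3f}B")  # ~9.15B (card: 9.154B)
- ```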
-
- ## Run the model
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "utter-project/EuroLLM-9B-Instruct"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModelForCausalLM.from_pretrained(model_id)
-
- messages = [
-     {
-         "role": "system",
-         "content": "You are EuroLLM --- an AI assistant specialized in European languages that provides safe, educational and helpful answers.",
-     },
-     {
-         "role": "user",
-         "content": "What is the capital of Portugal? How would you describe it?",
-     },
- ]
-
- # Build the chat prompt with the model's chat template and generate a reply.
- inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
- outputs = model.generate(inputs, max_new_tokens=1024)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
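-
- For a 9B-parameter checkpoint you will usually want to load the weights in bf16 and place them on a GPU rather than keeping the default fp32 copy on CPU. A minimal variant of the snippet above, assuming a CUDA-capable GPU and that the `accelerate` package is installed for `device_map="auto"`:
-
- ```python
- import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "utter-project/EuroLLM-9B-Instruct"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- # bf16 matches the training precision mentioned above and roughly halves memory use vs fp32.
- model = AutoModelForCausalLM.from_pretrained(
-     model_id, torch_dtype=torch.bfloat16, device_map="auto"
- )
-
- messages = [{"role": "user", "content": "What is the capital of Portugal? How would you describe it?"}]
- inputs = tokenizer.apply_chat_template(
-     messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
- ).to(model.device)
- outputs = model.generate(inputs, max_new_tokens=256)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```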
-
- ## Results
-
- ### EU Languages
-
- Table 1: Comparison of open-weight LLMs on multilingual benchmarks. The Borda count corresponds to the average ranking of the models (see Colombo et al., 2022). For ARC-Challenge, Hellaswag, and MMLU we use the Okapi datasets (Lai et al., 2023), which include 11 languages. For MMLU-Pro and MUSR we translate the English version with Tower (Alves et al., 2024) into 6 EU languages.
- * As there are no public versions of the pre-trained models, we evaluated them using the post-trained versions.
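-
- To make the ranking metric concrete: the Borda count reported here is a model's rank (1 = best) averaged over the benchmarks, so lower is better and a model ranked first everywhere scores 1.0. A toy illustration with made-up ranks:
-
- ```python
- # Borda count as described in the Table 1 caption: average rank across benchmarks (1 = best).
- # The ranks below are invented purely to show the computation.
- ranks = {"model_a": [1, 1, 2], "model_b": [2, 3, 1], "model_c": [3, 2, 3]}
- borda = {name: sum(r) / len(r) for name, r in ranks.items()}
- print(borda)  # model_a ≈ 1.33, model_b = 2.0, model_c ≈ 2.67
- ```
-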
- The results in Table 1 highlight EuroLLM-9B's superior performance on multilingual tasks compared to other European-developed models (as shown by the Borda count of 1.0), as well as its strong competitiveness with non-European models, achieving results comparable to Gemma-2-9B and outperforming the rest on most benchmarks.
-
- ### English
-
- Table 2: Comparison of open-weight LLMs on English general benchmarks.
- * As there are no public versions of the pre-trained models, we evaluated them using the post-trained versions.
-
- The results in Table 2 demonstrate EuroLLM's strong performance on English tasks, surpassing most European-developed models and matching the performance of Mistral-7B (obtaining the same Borda count).
-
- ## Bias, Risks, and Limitations
-
- EuroLLM-9B has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
-
- ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
 
 