Markobes committed (commit c28214c, verified, 1 parent: a6e1e9b)

Upload README.md with huggingface_hub

Files changed (1): README.md (+740, -0)
---
datasets:
- bigscience/xP3mt
- mc4
license: apache-2.0
language:
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
tags:
- text2text-generation
- llama-cpp
- gguf-my-repo
widget:
- text: Life is beautiful! Translate to Mongolian.
  example_title: mn-en translation
- text: Le mot japonais «憂鬱» veut dire quoi en Odia?
  example_title: jp-or-fr translation
- text: Stell mir eine schwierige Quiz Frage bei der es um Astronomie geht. Bitte
    stell die Frage auf Norwegisch.
  example_title: de-nb quiz
- text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous
    review as positive, neutral or negative?
  example_title: zh-en sentiment
- text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
  example_title: zh-zh sentiment
- text: Suggest at least five related search terms to "Mạng neural nhân tạo".
  example_title: vi-en query
- text: Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels».
  example_title: fr-fr query
- text: Explain in a sentence in Telugu what is backpropagation in neural networks.
  example_title: te-en qa
- text: Why is the sky blue?
  example_title: en-en qa
- text: 'Write a fairy tale about a troll saving a princess from a dangerous dragon.
    The fairy tale is a masterpiece that has achieved praise worldwide and its moral
    is "Heroes Come in All Shapes and Sizes". Story (in Spanish):'
  example_title: es-en fable
- text: 'Write a fable about wood elves living in a forest that is suddenly invaded
    by ogres. The fable is a masterpiece that has achieved praise worldwide and its
    moral is "Violence is the last refuge of the incompetent". Fable (in Hindi):'
  example_title: hi-en fable
pipeline_tag: text2text-generation
base_model: bigscience/mt0-xxl-mt
model-index:
- name: mt0-xxl-mt
  results:
  - task:
      type: Coreference resolution
    dataset:
      name: Winogrande XL (xl)
      type: winogrande
      config: xl
      split: validation
      revision: a80f460359d1e9a67c006011c94de42a8759430c
    metrics:
    - type: Accuracy
      value: 62.67
  - task:
      type: Coreference resolution
    dataset:
      name: XWinograd (en)
      type: Muennighoff/xwinograd
      config: en
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 83.31
  - task:
      type: Coreference resolution
    dataset:
      name: XWinograd (fr)
      type: Muennighoff/xwinograd
      config: fr
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 78.31
  - task:
      type: Coreference resolution
    dataset:
      name: XWinograd (jp)
      type: Muennighoff/xwinograd
      config: jp
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 80.19
  - task:
      type: Coreference resolution
    dataset:
      name: XWinograd (pt)
      type: Muennighoff/xwinograd
      config: pt
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 80.99
  - task:
      type: Coreference resolution
    dataset:
      name: XWinograd (ru)
      type: Muennighoff/xwinograd
      config: ru
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 79.05
  - task:
      type: Coreference resolution
    dataset:
      name: XWinograd (zh)
      type: Muennighoff/xwinograd
      config: zh
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 82.34
  - task:
      type: Natural language inference
    dataset:
      name: ANLI (r1)
      type: anli
      config: r1
      split: validation
      revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
    metrics:
    - type: Accuracy
      value: 49.5
  - task:
      type: Natural language inference
    dataset:
      name: ANLI (r2)
      type: anli
      config: r2
      split: validation
      revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
    metrics:
    - type: Accuracy
      value: 42
  - task:
      type: Natural language inference
    dataset:
      name: ANLI (r3)
      type: anli
      config: r3
      split: validation
      revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
    metrics:
    - type: Accuracy
      value: 48.17
  - task:
      type: Natural language inference
    dataset:
      name: SuperGLUE (cb)
      type: super_glue
      config: cb
      split: validation
      revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
    metrics:
    - type: Accuracy
      value: 87.5
  - task:
      type: Natural language inference
    dataset:
      name: SuperGLUE (rte)
      type: super_glue
      config: rte
      split: validation
      revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
    metrics:
    - type: Accuracy
      value: 84.84
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (ar)
      type: xnli
      config: ar
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 58.03
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (bg)
      type: xnli
      config: bg
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 59.92
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (de)
      type: xnli
      config: de
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 60.16
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (el)
      type: xnli
      config: el
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 59.2
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (en)
      type: xnli
      config: en
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 62.25
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (es)
      type: xnli
      config: es
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 60.92
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (fr)
      type: xnli
      config: fr
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 59.88
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (hi)
      type: xnli
      config: hi
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 57.47
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (ru)
      type: xnli
      config: ru
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 58.67
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (sw)
      type: xnli
      config: sw
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 56.79
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (th)
      type: xnli
      config: th
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 58.03
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (tr)
      type: xnli
      config: tr
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 57.67
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (ur)
      type: xnli
      config: ur
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 55.98
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (vi)
      type: xnli
      config: vi
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 58.92
  - task:
      type: Natural language inference
    dataset:
      name: XNLI (zh)
      type: xnli
      config: zh
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 58.71
  - task:
      type: Sentence completion
    dataset:
      name: StoryCloze (2016)
      type: story_cloze
      config: '2016'
      split: validation
      revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
    metrics:
    - type: Accuracy
      value: 94.66
  - task:
      type: Sentence completion
    dataset:
      name: SuperGLUE (copa)
      type: super_glue
      config: copa
      split: validation
      revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
    metrics:
    - type: Accuracy
      value: 88
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (et)
      type: xcopa
      config: et
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 81
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (ht)
      type: xcopa
      config: ht
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 79
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (id)
      type: xcopa
      config: id
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 90
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (it)
      type: xcopa
      config: it
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 88
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (qu)
      type: xcopa
      config: qu
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 56
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (sw)
      type: xcopa
      config: sw
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 81
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (ta)
      type: xcopa
      config: ta
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 81
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (th)
      type: xcopa
      config: th
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 76
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (tr)
      type: xcopa
      config: tr
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 76
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (vi)
      type: xcopa
      config: vi
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 85
  - task:
      type: Sentence completion
    dataset:
      name: XCOPA (zh)
      type: xcopa
      config: zh
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 87
  - task:
      type: Sentence completion
    dataset:
      name: XStoryCloze (ar)
      type: Muennighoff/xstory_cloze
      config: ar
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 91
  - task:
      type: Sentence completion
    dataset:
      name: XStoryCloze (es)
      type: Muennighoff/xstory_cloze
      config: es
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 93.38
  - task:
      type: Sentence completion
    dataset:
      name: XStoryCloze (eu)
      type: Muennighoff/xstory_cloze
      config: eu
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 91.13
  - task:
      type: Sentence completion
    dataset:
      name: XStoryCloze (hi)
      type: Muennighoff/xstory_cloze
      config: hi
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 90.73
  - task:
      type: Sentence completion
    dataset:
      name: XStoryCloze (id)
      type: Muennighoff/xstory_cloze
      config: id
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 93.05
  - task:
      type: Sentence completion
    dataset:
      name: XStoryCloze (my)
      type: Muennighoff/xstory_cloze
      config: my
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 86.7
  - task:
      type: Sentence completion
    dataset:
      name: XStoryCloze (ru)
      type: Muennighoff/xstory_cloze
      config: ru
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 91.66
  - task:
      type: Sentence completion
    dataset:
      name: XStoryCloze (sw)
      type: Muennighoff/xstory_cloze
      config: sw
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 89.61
  - task:
      type: Sentence completion
    dataset:
      name: XStoryCloze (te)
      type: Muennighoff/xstory_cloze
      config: te
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 90.4
  - task:
      type: Sentence completion
    dataset:
      name: XStoryCloze (zh)
      type: Muennighoff/xstory_cloze
      config: zh
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 93.05
---

# Markobes/mt0-xxl-mt-Q4_K_M-GGUF
This model was converted to GGUF format from [`bigscience/mt0-xxl-mt`](https://huggingface.co/bigscience/mt0-xxl-mt) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bigscience/mt0-xxl-mt) for more details on the model.
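If you prefer to fetch the quantized file ahead of time instead of letting llama.cpp pull it via `--hf-repo`, a minimal sketch with the `huggingface-cli` tool (assuming `huggingface_hub` is installed and using the same file name as in the commands below):

```bash
# Sketch: download the GGUF file into the current directory with huggingface_hub's CLI
pip install -U huggingface_hub
huggingface-cli download Markobes/mt0-xxl-mt-Q4_K_M-GGUF mt0-xxl-mt-q4_k_m.gguf --local-dir .
```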

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -p "The meaning to life and the universe is"
```
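Since mt0-xxl-mt is an instruction-tuned multilingual model, prompts phrased as instructions (like the widget examples in the metadata above) tend to fit it better than open-ended continuations; the prompt below is only an illustrative variation on the same command:

```bash
# Same CLI invocation, with an instruction-style prompt drawn from the widget examples
llama-cli --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf \
  -p "Translate to German: Life is beautiful."
```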

### Server:
```bash
llama-server --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -c 2048
```
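Once the server is running (it listens on http://localhost:8080 by default), you can query it over HTTP; a minimal sketch against the server's completion endpoint, with an illustrative prompt and token budget:

```bash
# Sketch: query the running llama-server instance (adjust host/port if you changed the defaults)
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Why is the sky blue?", "n_predict": 64}'
```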

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
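Recent llama.cpp checkouts have replaced the Makefile with a CMake build; if `make` fails on your checkout, a roughly equivalent CMake invocation (option names can vary between versions, so treat this as a sketch) is:

```bash
# Sketch: CMake-based build with CURL support enabled; binaries land under build/bin/
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```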

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -c 2048
```