GGUF quants with iMatrix for: https://huggingface.co/ShinojiResearch/Senku-70B-Full

Q3_K_M, IQ3_XXS, Q2_K, Q2_K_S and Q3_K_S are provided here.

For IQ2_XS and IQ2_XXS, see: https://huggingface.co/dranger003/Senku-70B-iMat.GGUF
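For those curious about the process, here is a minimal sketch of how iMatrix quants like these are typically produced with llama.cpp. The file names are placeholders, and the `-c`/`--chunks` values are assumptions loosely mirroring the `c32_ch300` tag in the file names below, not the exact settings used here:

```shell
# Sketch only: assumes a built llama.cpp checkout and an FP16 GGUF of the model.
# File names and parameter values are illustrative.

# 1) Compute an importance matrix over a calibration text
./imatrix -m senku-70b-f16.gguf -f wiki.train.raw -o imatrix.dat -c 32 --chunks 300

# 2) Quantize with the importance matrix applied
./quantize --imatrix imatrix.dat senku-70b-f16.gguf senku-70b-Q3_K_M.gguf Q3_K_M
```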

LlamaCPP benchmarks:

- Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf,-,Hellaswag,84.5,,400,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,ShinojiResearch,Nexesenex,
- Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf,-,Hellaswag,83.3,,1000,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,ShinojiResearch,Nexesenex,
- Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf,-,Arc-Challenge,59.19732441,,299,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,ShinojiResearch,Nexesenex,
- Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf,-,Arc-Easy,77.89473684,,570,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,ShinojiResearch,Nexesenex,
- Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf,-,MMLU,49.52076677,,313,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,ShinojiResearch,Nexesenex,
- Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf,-,TruthfulQA,38.92288862,,817,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,ShinojiResearch,Nexesenex,
- Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf,-,Winogrande,78.4530,,1267,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,ShinojiResearch,Nexesenex,
- Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf,-,wikitext,4.3440,512,512,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,ShinojiResearch,Nexesenex,81
- Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf,-,wikitext,3.8722,512,512,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,ShinojiResearch,Nexesenex,655
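The rows above are raw comma-separated log lines. Assuming the field layout shown (file name first, benchmark name in the third field, score in the fourth), they can be picked apart with a few lines of Python:

```python
# Parse one of the comma-separated benchmark rows above.
# Field positions are inferred from the rows themselves.
row = ("Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf,-,Hellaswag,84.5,,400,"
       "2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,ShinojiResearch,Nexesenex,")
fields = row.split(",")
model_file, bench, score = fields[0], fields[2], float(fields[3])
print(f"{model_file}: {bench} = {score}")
# prints: Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf: Hellaswag = 84.5
```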

The Hellaswag scores might actually be 5-6 points higher, due to recent changes in LlamaCPP.

Senku is dominant on Arc-Challenge among Miqu-based models, providing a real bump over the baseline Miqu.

Perhaps a reflection of its EQ-Bench score, the highest to date (2024-02-07) among 70b models?

On the other hand, its TruthfulQA score suffers quite a bit.

Here are the benchmarks of its toughest competitor, to my knowledge, at equal quant except for the number of chunks used for the iMatrix:

- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Hellaswag,84.5,,400,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Hellaswag,83.6,,1000,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Arc-Challenge,58.52842809,,299,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Arc-Easy,77.36842105,,570,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,MMLU,49.84025559,,313,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,TruthfulQA,42.83965728,,817,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Winogrande,78.7687,,1267,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,wikitext,4.2963,512,512,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,81
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,wikitext,3.8397,512,512,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,655

I think both of these models deserve a 5-million-token iMatrix (512 ctx, 10,000 chunks, on wiki.train.raw).

And, why not, a combination of such iMatrixes from several major languages (at least English, French, German, and Spanish).

Alas, I can't provide this for now.
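For reference, the 5-million-token figure above follows directly from the iMatrix parameters: total tokens = context length per chunk × number of chunks. A quick check:

```python
# Tokens covered by the proposed importance matrix
ctx_len = 512      # context length per chunk
chunks = 10_000    # number of chunks over wiki.train.raw
total_tokens = ctx_len * chunks
print(total_tokens)  # 5120000, i.e. ~5 million tokens
```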