Senku-70b-iMat.GGUF

GGUF quants with iMatrix for: https://huggingface.co/ShinojiResearch/Senku-70B-Full

Q3_K_M, IQ3_XXS, Q2_K, Q2_K_S, and Q3_K_S are provided here.

For IQ2_XS and IQ2_XXS, see: https://huggingface.co/dranger003/Senku-70B-iMat.GGUF
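
For reference, here is a minimal sketch of how iMatrix quants like these are typically produced with LlamaCPP's imatrix and quantize tools. The model and calibration file names are placeholders, and the -c 32 / --chunks 300 values are an assumption inferred from the "c32_ch300" tag in the filenames:

```bash
# Sketch only: flags as in llama.cpp around b2081; file names are placeholders.
# 1) Compute an importance matrix over calibration text
#    (-c 32 / --chunks 300 assumed from the "c32_ch300" filename tag).
./imatrix -m Senku-70B-f16.gguf -f calibration.txt -o senku.imatrix -c 32 --chunks 300

# 2) Quantize, letting the importance matrix guide the low-bit formats.
./quantize --imatrix senku.imatrix Senku-70B-f16.gguf Senku-70b-Q3_K_M.gguf Q3_K_M
```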

LlamaCPP benchmarks:

All runs on Senku-70b-b2081-iMat-c32_ch300-Q3_K_M.gguf (70b, Mistral_Medium base, 32768 context, GGUF quant by Nexesenex from ShinojiResearch's weights, benched 2024-02-07):

| Test | Score | Tasks / chunks |
| --- | --- | --- |
| Hellaswag | 84.5 | 400 tasks |
| Hellaswag | 83.3 | 1000 tasks |
| Arc-Challenge | 59.19732441 | 299 tasks |
| Arc-Easy | 77.89473684 | 570 tasks |
| MMLU | 49.52076677 | 313 tasks |
| TruthfulQA | 38.92288862 | 817 tasks |
| Wikitext (perplexity, 512 ctx) | 4.3440 | 81 chunks |
| Wikitext (perplexity, 512 ctx) | 3.8722 | 655 chunks |
| Winogrande | 78.4530 | 1267 tasks |
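
These are the kind of scores produced by LlamaCPP's perplexity tool; a hedged sketch of how the Hellaswag, Winogrande, and wikitext rows can be reproduced (dataset file names are placeholders, and the Arc/MMLU/TruthfulQA rows use the analogous --multiple-choice mode):

```bash
# Hellaswag, first 400 tasks (placeholder dataset file).
./perplexity -m Senku-70b-Q3_K_M.gguf -f hellaswag_val_full.txt --hellaswag --hellaswag-tasks 400

# Winogrande, 1267 tasks (placeholder dataset file).
./perplexity -m Senku-70b-Q3_K_M.gguf -f winogrande-debiased-eval.csv --winogrande --winogrande-tasks 1267

# Wikitext perplexity at 512 context; the 81- vs 655-chunk rows simply
# process less or more of the test file.
./perplexity -m Senku-70b-Q3_K_M.gguf -f wiki.test.raw -c 512
```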

The Hellaswag scores might be 5-6 points higher than comparable older runs, due to some recent changes in LlamaCPP.

Senku is dominant on Arc-Challenge among Miqu-based models, providing a real bump over the baseline Miqu.

Perhaps a reflection of its EQ-Bench score, the highest to date (2024-02-07) among 70b models?

On the other hand, its TruthfulQA score suffers quite a bit.

Here are the benchmarks of its toughest competitor (to my knowledge), at equal quant, differing only in the number of iMatrix chunks:

All runs on Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf (70b, Mistral_Medium base, 32768 context, GGUF quant by Nexesenex from NeverSleep's weights, benched 2024-02-07):

| Test | Score | Tasks / chunks |
| --- | --- | --- |
| Hellaswag | 84.5 | 400 tasks |
| Hellaswag | 83.6 | 1000 tasks |
| Arc-Challenge | 58.52842809 | 299 tasks |
| Arc-Easy | 77.36842105 | 570 tasks |
| MMLU | 49.84025559 | 313 tasks |
| TruthfulQA | 42.83965728 | 817 tasks |
| Wikitext (perplexity, 512 ctx) | 4.2963 | 81 chunks |
| Wikitext (perplexity, 512 ctx) | 3.8397 | 655 chunks |
| Winogrande | 78.7687 | 1267 tasks |

I think both of these models deserve a 5-million-token iMatrix (512 ctx, 10,000 chunks, on wiki.train.raw).
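
In LlamaCPP terms, that would look something like the following (a sketch; 512 tokens × 10,000 chunks ≈ 5.12M tokens; the model and output file names are placeholders):

```bash
# Assumed command line: 512-token chunks, 10,000 of them, over wiki.train.raw.
./imatrix -m Senku-70B-f16.gguf -f wiki.train.raw -o senku-5M.imatrix -c 512 --chunks 10000
```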

And why not a combination of such iMatrixes built from different major languages (at least English, French, German, and Spanish)?

Alas, I can't provide this for now.
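
For anyone who wants to experiment, one simple way to approximate a multilingual iMatrix, rather than combining separate iMatrix files, is to concatenate calibration text from each language before computing a single matrix. All file names below are hypothetical:

```bash
# Hypothetical per-language corpora, concatenated into one calibration file.
cat wiki.en.raw wiki.fr.raw wiki.de.raw wiki.es.raw > wiki.multi.raw

# One iMatrix over the combined multilingual corpus.
./imatrix -m Senku-70B-f16.gguf -f wiki.multi.raw -o senku-multi.imatrix -c 512 --chunks 10000
```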