[Request #59]

Request description:
"An experimental model that turned really well. Scores high on Chai leaderboard (slerp8bv2 there). Feel smarter than average L3 merges for RP."

Model page:
R136a1/Bungo-L3-8B

Use with the latest version of KoboldCpp, or this alternative fork if you have issues.

General chart with relative quant performance.

Recommended read:

"Which GGUF is right for me? (Opinionated)" by Artefact2

["Which GGUF is right for me? (Opinionated)" by Artefact2 – first graph.]

Downloads last month: 134
Format: GGUF
Model size: 8.03B params
Architecture: llama

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit.
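As a rough sanity check on download sizes, the approximate file size at each bit-width can be estimated from the 8.03B parameter count. This is a sketch only: real GGUF files mix quant types per tensor and add metadata, so actual sizes differ somewhat.

```python
def quant_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough on-disk size of a quantized model in GB,
    ignoring GGUF metadata and per-tensor quant mixing."""
    return n_params * bits_per_weight / 8 / 1e9

# Estimate sizes for 8.03B params at the listed bit-widths.
for bits in (3, 4, 5, 6, 8, 16):
    print(f"{bits:>2}-bit: ~{quant_size_gb(8.03e9, bits):.2f} GB")
```

For example, a 4-bit quant of an 8.03B-parameter model lands near 4 GB before metadata, while the 16-bit version is roughly 16 GB, which is why lower-bit quants are the usual choice for consumer GPUs.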

Inference API (serverless) has been turned off for this model.

Model tree for Lewdiculous/Bungo-L3-8B-GGUF-IQ-Imatrix-Request:
Base model: R136a1/Bungo-L3-8B
Quantized models (4), including this one.