KSI-RPG-128k-7B-GGUF ⭐️⭐️⭐️

KSI-RPG-128k-7B is a merge of the following models using mergekit:

- AlekseiPravdin/KSI-RP-NSK-128k-7B
- flammenai/flammen18X-mistral-7B

🧩 Configuration

```yaml
slices:
  - sources:
      - model: AlekseiPravdin/KSI-RP-NSK-128k-7B
        layer_range: [0, 32]
      - model: flammenai/flammen18X-mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: AlekseiPravdin/KSI-RP-NSK-128k-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
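The config above interpolates the two models' weights with SLERP, using different `t` gradients for attention and MLP layers. The sketch below illustrates the idea, not mergekit's actual code: a minimal SLERP over two tensors, plus the expansion of a gradient list like `[0, 0.5, 0.3, 0.7, 1]` into one `t` value per layer. Function names (`slerp`, `expand_gradient`) are illustrative assumptions.

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.
    t=0 returns a, t=1 returns b; intermediate t follows the arc
    between the two directions rather than the straight chord."""
    a_n = a.ravel() / (np.linalg.norm(a) + eps)
    b_n = b.ravel() / (np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if theta < eps:  # nearly parallel: plain lerp is numerically safer
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

def expand_gradient(anchors, num_layers):
    """Linearly interpolate a gradient list (e.g. [0, 0.5, 0.3, 0.7, 1])
    into one interpolation weight per layer index."""
    xs = np.linspace(0.0, 1.0, num=len(anchors))
    return np.interp(np.linspace(0.0, 1.0, num=num_layers), xs, anchors)
```

Read this way, the `self_attn` gradient keeps early attention layers close to the base model (t near 0) and pulls late ones toward flammen18X, while the `mlp` gradient does the reverse; unfiltered tensors use the flat `t: 0.5`.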

Eval embedding benchmark (with 70 specific questions):

(Benchmark comparison charts for: inf, md28g, SK, ks-inf, command-r, NSK, NSMv2, aura, ivanDrogo, KSI, KSI-RPG, llama3, KSIF, d29l38.)

Downloads last month: 137

Format: GGUF
Model size: 7.24B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
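As a rough guide to which quantization fits your hardware, file size scales with bits per weight. The back-of-the-envelope sketch below ignores quantization metadata and mixed-precision tensors, so real GGUF files will differ somewhat; the function name is illustrative.

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF size estimate: params x bits / 8 bytes per weight,
    ignoring per-block quantization overhead and metadata."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate sizes for the 7.24B-param model at each offered bit width
for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_gguf_size_gb(7.24e9, bits):.2f} GB")
```

For example, the 4-bit quant works out to roughly 3.6 GB of weights before overhead, while 8-bit is about twice that.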
