QuantFactory/Biomistral-Calme-Instruct-7b-GGUF

This is a quantized version of arcee-ai/Biomistral-Calme-Instruct-7b, created using llama.cpp.
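The GGUF files can be loaded with any llama.cpp-compatible runtime. As a minimal sketch using llama-cpp-python (the GGUF filename below is an assumption; substitute the quantization file you actually downloaded):

```python
# Minimal inference sketch with llama-cpp-python (pip install llama-cpp-python).
# NOTE: the model_path filename is illustrative, not a guaranteed file in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Biomistral-Calme-Instruct-7b.Q4_K_M.gguf",  # assumed 4-bit quant file
    n_ctx=4096,  # context window size
)

output = llm(
    "List common drug interactions to check before prescribing warfarin.",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```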

Original Model Card

Biomistral-Calme-Instruct-7b

Biomistral-Calme-Instruct-7b is a merge of the following models using mergekit:

* MaziyarPanahi/Calme-7B-Instruct-v0.1.1
* BioMistral/BioMistral-7B

🧩 Configuration

  slices:
    - sources:
        - model: MaziyarPanahi/Calme-7B-Instruct-v0.1.1
          layer_range: [0, 32]
        - model: BioMistral/BioMistral-7B
          layer_range: [0, 32]
  merge_method: slerp
  base_model: MaziyarPanahi/Calme-7B-Instruct-v0.1.1
  parameters:
    t:
      - filter: self_attn
        value: [0, 0.5, 0.3, 0.7, 1]
      - filter: mlp
        value: [1, 0.5, 0.7, 0.3, 0]
      - value: 0.5
  dtype: bfloat16
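
As a sketch only (not part of the original card): a merge like this can be reproduced by saving the configuration above to a YAML file and invoking mergekit's `mergekit-yaml` command, for example from Python. The config and output paths below are assumptions.

```python
# Sketch: run mergekit on the configuration above.
# Assumes mergekit is installed (pip install mergekit) and the config is saved
# as merge-config.yml; the output directory name is an assumption.
import subprocess

subprocess.run(
    ["mergekit-yaml", "merge-config.yml", "./Biomistral-Calme-Instruct-7b"],
    check=True,  # raise if the merge fails
)
```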
Downloads last month: 236
Model size: 7.24B params
Architecture: llama
Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit