
# Marcoro14-7B-slerp

Marcoro14-7B-slerp is a merge of the following models using [mergekit](https://github.com/arcee-ai/mergekit):

* [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

## 🧩 Configuration

```yaml
models:
  - model: meta-llama/Meta-Llama-3-8B
    # no parameters necessary for base model
  - model: mistralai/Mistral-7B-Instruct-v0.1
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: meta-llama/Meta-Llama-3-8B
parameters:
  normalize: true
dtype: float16
```
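For reference, a minimal sketch of writing this configuration to disk and sanity-checking it before merging. It assumes PyYAML is installed; the `mergekit-yaml` CLI entry point is the usual way to run a merge from such a file, but check the mergekit README for the current flags:

```python
import yaml  # PyYAML; install with `pip install pyyaml`

# The merge configuration from the card above, verbatim.
CONFIG = """
models:
  - model: meta-llama/Meta-Llama-3-8B
    # no parameters necessary for base model
  - model: mistralai/Mistral-7B-Instruct-v0.1
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: meta-llama/Meta-Llama-3-8B
parameters:
  normalize: true
dtype: float16
"""

config = yaml.safe_load(CONFIG)

# Basic sanity checks before handing the file to mergekit.
assert config["merge_method"] == "ties"
assert config["base_model"] == "meta-llama/Meta-Llama-3-8B"
assert len(config["models"]) == 2

# Write the config so mergekit can consume it, e.g.:
#   mergekit-yaml config.yaml ./merged-model
with open("config.yaml", "w") as f:
    f.write(CONFIG)
```

Note that the per-model `density` and `weight` parameters apply to the TIES merge; the base model needs no parameters of its own.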