---
license: apache-2.0
tags:
- merge
- mergekit
- Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged
- Ahmad0067/llama-3-8b-Instruct-Bloodwork_Specialist_Synth_data_Phase_1_and_2_corect_unsloth_merged
- Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged
---

# llama-3-8b-Instruct-DARE_TIES_merged-ref-blood-pres

llama-3-8b-Instruct-DARE_TIES_merged-ref-blood-pres is a merge of the following models using [mergekit](https://github.com/arcee-ai/mergekit):
* [Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged](https://huggingface.co/Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged)
* [Ahmad0067/llama-3-8b-Instruct-Bloodwork_Specialist_Synth_data_Phase_1_and_2_corect_unsloth_merged](https://huggingface.co/Ahmad0067/llama-3-8b-Instruct-Bloodwork_Specialist_Synth_data_Phase_1_and_2_corect_unsloth_merged)
* [Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged](https://huggingface.co/Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged)

## 🧩 Configuration

```yaml
models:
  - model: Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged
    parameters:
      density: 0.33
      weight: 0.33
  - model: Ahmad0067/llama-3-8b-Instruct-Bloodwork_Specialist_Synth_data_Phase_1_and_2_corect_unsloth_merged
    parameters:
      density: 0.33
      weight: 0.33
  - model: Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged
    parameters:
      density: 0.34
      weight: 0.34
merge_method: dare_ties
base_model: unsloth/llama-3-8b-Instruct
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
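In a DARE-TIES merge the per-model `weight` values (0.33 + 0.33 + 0.34) are chosen to sum to 1.0 so the three fine-tunes contribute roughly equally. A short sketch like the following (assuming PyYAML is installed) can parse the config above and sanity-check those sums before running `mergekit`:

```python
import yaml  # PyYAML; used only to parse the merge config

CONFIG = """
models:
  - model: Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged
    parameters:
      density: 0.33
      weight: 0.33
  - model: Ahmad0067/llama-3-8b-Instruct-Bloodwork_Specialist_Synth_data_Phase_1_and_2_corect_unsloth_merged
    parameters:
      density: 0.33
      weight: 0.33
  - model: Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged
    parameters:
      density: 0.34
      weight: 0.34
merge_method: dare_ties
base_model: unsloth/llama-3-8b-Instruct
parameters:
  normalize: true
  int8_mask: true
dtype: float16
"""

cfg = yaml.safe_load(CONFIG)

# Sum the per-model mixing weights and retained-parameter densities.
total_weight = sum(m["parameters"]["weight"] for m in cfg["models"])
total_density = sum(m["parameters"]["density"] for m in cfg["models"])

print(f"method={cfg['merge_method']}  weight sum={total_weight:.2f}  density sum={total_density:.2f}")
```

With `normalize: true`, mergekit rescales the weights anyway, but keeping them summing to 1.0 makes the intended contribution of each specialist model explicit.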
