An 11B frankenmerge of teknium/OpenHermes-2.5-Mistral-7B and Intel/neural-chat-7b-v3-1.
GGUF: https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF
The merge uses the following layer slices and merge method (see the full config sketch below):
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [0, 8]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [4, 12]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [9, 16]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [13, 20]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [17, 24]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [21, 28]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [25, 32]
merge_method: passthrough
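
For reference, a complete mergekit configuration reproducing these slices might look like the sketch below. The `slices:`/`sources:` nesting and the `dtype: float16` line are assumptions about the standard mergekit passthrough format, not necessarily the exact file used for this merge.

```yaml
# Sketch of a mergekit passthrough (frankenmerge) config for the slices above.
# Assumed format; dtype is an illustrative choice, not taken from the original card.
slices:
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [0, 8]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [4, 12]
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [9, 16]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [13, 20]
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [17, 24]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [21, 28]
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [25, 32]
merge_method: passthrough
dtype: float16
```

A config like this can be run through mergekit's CLI (e.g. `mergekit-yaml config.yml ./output-model`, with the output path chosen freely) to produce the stacked-layer model.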
Benchmarks are coming soon...