---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
---
This is the fine-tuned base model from [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B), paired with the trained Medusa heads [OpenHermes-2.5-medusa](https://huggingface.co/omarelshehy/OpenHermes-2.5-Mistral-7B-medusa).

The base model and the Medusa heads were trained together, and should therefore ideally be used together for the best performance.

WIP: replace the full model with an adapter on top of the original model.

# Training Details

The model and the heads were trained on a self-distilled dataset, generated by running inference over the original dataset used to train [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B).

Inference over the dataset was performed with a [vLLM](https://docs.vllm.ai/en/latest/index.html) async server on an A100.
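A minimal sketch of what that self-distillation step can look like. Everything here is an assumption for illustration (the endpoint, the `to_chatml` helper, and the token budget are hypothetical): each prompt is wrapped in ChatML, the chat template OpenHermes uses, and sent to a running vLLM OpenAI-compatible server to collect the teacher's completions. For simplicity this sketch is synchronous rather than async.

```python
import json
import urllib.request

def to_chatml(user_message: str) -> str:
    """Wrap a user turn in ChatML, leaving the assistant turn open for generation."""
    return (
        "<|im_start|>user\n" + user_message + "<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def distill_one(prompt: str, url: str = "http://localhost:8000/v1/completions") -> str:
    """Ask the teacher model (served by vLLM) for a completion of one prompt."""
    payload = {
        "model": "teknium/OpenHermes-2.5-Mistral-7B",
        "prompt": to_chatml(prompt),
        "max_tokens": 512,
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["text"]
```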

Training was performed with the help of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on a single A100 GPU, using QLoRA for 2 epochs.
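For reference, an Axolotl QLoRA run of this shape is configured through a YAML file. The values below are illustrative assumptions only, not the actual training config, and any Medusa-specific head options are omitted:

```yaml
# Illustrative QLoRA settings only -- not the actual training config.
base_model: teknium/OpenHermes-2.5-Mistral-7B
load_in_4bit: true          # 4-bit base weights (the "Q" in QLoRA)
adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true    # attach LoRA adapters to all linear layers
sequence_len: 4096
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 2
learning_rate: 0.0002
optimizer: paged_adamw_8bit
lr_scheduler: cosine
```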

# Inference evaluation
(This is still a WIP)

I tested the model's latency using [TGI](https://huggingface.co/docs/text-generation-inference/en/index). As several people have reported, Medusa's performance depends on the domain or task: generally speaking I measured a 1.9x improvement in latency, while on code-related tasks the improvement can reach 3x.