$$
W_{mistral} + LoRA_{hermes} = W_{hermes} \\
W_{hermes} - LoRA_{hermes} = W_{mistral}
$$
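In plain tensor terms, the adapter is just a low-rank difference that can be added to or subtracted from the base weights. A minimal sketch of that arithmetic (illustrative shapes and values, not the actual extraction code):

```python
import torch

d, r = 4096, 16                     # hidden size, LoRA rank (illustrative)
W_mistral = torch.randn(d, d)       # stand-in for one base weight matrix
A = torch.randn(r, d) * 0.01        # LoRA factors: the update is B @ A
B = torch.randn(d, r) * 0.01
LoRA_hermes = B @ A                 # low-rank difference between the models

W_hermes = W_mistral + LoRA_hermes  # merge: base + adapter
# unmerging recovers the base weights (up to float rounding)
assert torch.allclose(W_hermes - LoRA_hermes, W_mistral, atol=1e-6)
```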
22 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|

### Why Though?

Unfortunately, this adapter is not as simple to use as [typeof/zephyr-7b-beta-lora](https://huggingface.co/typeof/zephyr-7b-beta-lora), due to the way [OpenHermes](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) was trained: because it added tokens, the correspondence with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) is not 1-to-1, as it is for [typeof/zephyr-7b-beta-lora](https://huggingface.co/typeof/zephyr-7b-beta-lora). Nevertheless, if you have found yourself here, I'm sure you can figure out how to use it... if not, open an issue!
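For example, one plausible way to load it (a hedged sketch, not a tested recipe: the adapter id below is a placeholder for this repo, and it assumes the base model's embeddings must first be resized to the OpenHermes vocabulary to account for the added tokens):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# the OpenHermes tokenizer carries the added tokens (e.g. the ChatML markers)
tok = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16
)
# grow the embedding / lm_head rows so shapes match what the adapter expects
base.resize_token_embeddings(len(tok))

# placeholder id: substitute this repository's actual adapter id
model = PeftModel.from_pretrained(base, "typeof/openhermes-2.5-mistral-7b-lora")
```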
<!--
$$ W_{mistral} + LoRA_{zephyr} = W_{zephyr} $$
```