Update: After further testing, this has turned out exactly like I wanted, and it is one of my favorite models! It remains coherent at higher contexts and doesn't suffer the repetition issues I was having with Lumimaid.
# EXL2 Quants of SmartMaid-123b
[4.5bpw](https://huggingface.co/gghfez/SmartMaid-123b-exl2/tree/4.5bpw)
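
Each quant sits on its own branch of the repo, so you can fetch a single bpw variant instead of cloning everything. A sketch using `huggingface_hub` (the `fetch_quant` helper and the local directory name are illustrative, not part of the repo):

```python
from huggingface_hub import snapshot_download

def fetch_quant(branch: str = "4.5bpw") -> str:
    """Download one EXL2 quant branch of this repo and return its local path."""
    return snapshot_download(
        repo_id="gghfez/SmartMaid-123b-exl2",
        revision=branch,  # each bpw variant lives on its own branch
        local_dir=f"SmartMaid-123b-exl2-{branch}",
    )
```

The returned path can then be pointed at by any EXL2-capable loader.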
# SmartMaid-123b
This **experimental model** is a hybrid creation combining aspects of [Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407) and [Lumimaid-v0.2-123B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-123B) using LoRA (Low-Rank Adaptation) on the `mlp.down_proj` module.
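
LoRA adds a trainable low-rank delta on top of a frozen weight matrix rather than retraining the full matrix — here, that delta was applied only to the MLP down-projection. A minimal numpy sketch of the idea (the dimensions, rank, and scaling below are toy values for illustration, not the values used for this model):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2                  # toy sizes; the real down_proj is far larger
alpha = 4                                 # LoRA scaling factor (illustrative)

W = rng.standard_normal((d_out, d_in))    # frozen base weight (e.g. mlp.down_proj)
A = rng.standard_normal((r, d_in))        # low-rank factor A
B = rng.standard_normal((d_out, r))       # low-rank factor B

delta = (alpha / r) * (B @ A)             # rank-r weight delta
W_merged = W + delta                      # merged weight, as in a LoRA-based hybrid

print(np.linalg.matrix_rank(delta))       # at most r
```

Because the delta has rank at most `r`, the adaptation touches the full matrix while training only a tiny fraction of its parameters.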