|
This is a GGUF quant of https://huggingface.co/migtissera/Synthia-7B-v1.3 |
|
|
|
If you want to support me, you can [here](https://ko-fi.com/undiai). |
|
|
|
# Synthia v1.3 |
|
|
|
SynthIA (Synthetic Intelligent Agent) v1.3 is a Mistral-7B model trained on Orca-style datasets. It has been fine-tuned for instruction following and long-form conversations.
|
|
|
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message: |
|
|
|
`Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.` |
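For reference, here is how that system message might be wired into a llama-cpp-python call against one of the GGUF files. This is a minimal sketch: the filename, the example question, and the sampling settings are illustrative placeholders, and the plain SYSTEM/USER/ASSISTANT template is the one Synthia models typically use.

```python
# Minimal sketch: running a GGUF quant of Synthia with llama-cpp-python.
# The filename, question, and sampling settings below are placeholders.
from llama_cpp import Llama

SYSTEM = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning. "
    "Always answer without hesitation."
)

# Hypothetical filename; pick whichever quant you downloaded.
llm = Llama(model_path="synthia-7b-v1.3.Q4_K_M.gguf", n_ctx=4096)

# Synthia models typically use a plain SYSTEM/USER/ASSISTANT prompt template.
prompt = f"SYSTEM: {SYSTEM}\nUSER: Why is the sky blue?\nASSISTANT:"
out = llm(prompt, max_tokens=512, stop=["USER:"], temperature=0.7)
print(out["choices"][0]["text"])
```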
|
|
|
All Synthia models are uncensored. Please use them with caution and with the best of intentions. You are responsible for how you use Synthia.
|
|
|
## Training Details |
|
As with all my models, this was trained with QLoRA. The learning rate was 3e-4, with a 4096-token context length and a batch size of 64, trained on a single H100.
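
For readers unfamiliar with QLoRA, the setup looks roughly like this sketch using Hugging Face `peft` and `bitsandbytes`. Only the learning rate, context length, and batch size come from the notes above; the LoRA rank, alpha, and target modules are assumptions, not the author's actual configuration.

```python
# Sketch of a QLoRA setup matching the hyperparameters above.
# LoRA rank/alpha/target modules are assumptions; the learning rate,
# context length, and batch size come from the training notes.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # base weights quantized to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                    # assumed rank
    lora_alpha=32,                           # assumed alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)   # only the adapters are trainable

# A training loop (e.g., transformers Trainer) would then use
# learning_rate=3e-4, max sequence length 4096, effective batch size 64.
```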
|
|
|
It was trained on the Synthia-v1.2 dataset, which contains Chain-of-Thought (Orca), Tree-of-Thought, and long-form conversation data.
|
|
|
The dataset is very high quality, though not massive (~125K samples).
|
|
|
## License Disclaimer
|
|
|
This model is bound by the license and usage restrictions of the original Mistral model, and comes with no warranty or guarantees of any kind.
|
|
|
|