---
license: apache-2.0
tags:
  - mistral
  - conversational
  - text-generation-inference
base_model: BeaverAI/mistral-doryV2-12b
library_name: transformers
---

**Sampling:**
Mistral-Nemo-12B is very sensitive to the temperature sampler; try values near 0.3 at first, or you may get strange results. Mistral AI mentions this in the Transformers section of their model card.
Flash Attention also seems to have some odd effects on this model, though this is unconfirmed.
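
As a quick, non-authoritative illustration of the temperature recommendation above, the sketch below sets `temperature=0.3` through the llama-cpp-python bindings (rather than the llama.cpp CLI); the GGUF filename is a placeholder for whichever quant you download.

```python
from llama_cpp import Llama

# Placeholder filename -- point this at the quant file you actually downloaded.
llm = Llama(model_path="mistral-doryV2-12b-Q4_K_M.gguf", n_ctx=4096)

# Keep temperature near 0.3; higher values tend to produce strange output with this model.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short haiku about rivers."}],
    temperature=0.3,
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```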

**Original Model:**
[BeaverAI/mistral-doryV2-12b](https://huggingface.co/BeaverAI/mistral-doryV2-12b)

**How to Use:** llama.cpp
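
As a rough sketch rather than official instructions, one way to fetch a quant from this repo and run it is through the llama-cpp-python bindings; the repo id and filename below are assumptions, so substitute the actual values listed under Quants.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed repo id and quant filename -- replace with the real ones from the Quants list.
gguf_path = hf_hub_download(
    repo_id="starble-dev/mistral-doryV2-12b-GGUF",
    filename="mistral-doryV2-12b-Q4_K_M.gguf",
)

# Load the downloaded GGUF and generate with the recommended low temperature.
llm = Llama(model_path=gguf_path, n_ctx=8192)
result = llm("[INST] Summarize what a GGUF file is. [/INST]", temperature=0.3, max_tokens=128)
print(result["choices"][0]["text"])
```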

**License:**
Apache 2.0

**Quants**