Description
This repository contains GGUF quantizations of the original model, elinas/chronos-mistral-7b.
Files
- Chronos-Mistral-7b_Q2_K.gguf (2.72 GB) - smallest, significant quality loss - not recommended for most purposes
- Chronos-Mistral-7b_Q3_K_S.gguf (3.16 GB) - very small, high quality loss
- Chronos-Mistral-7b_Q3_K_M.gguf (3.52 GB) - very small, high quality loss
- Chronos-Mistral-7b_Q3_K_L.gguf (3.82 GB) - small, substantial quality loss
- Chronos-Mistral-7b_Q4_K_S.gguf (4.14 GB) - small, greater quality loss
- Chronos-Mistral-7b_Q4_K_M.gguf (4.37 GB) - medium, balanced quality - recommended
- Chronos-Mistral-7b_Q5_K_S.gguf (5 GB) - large, low quality loss - recommended
- Chronos-Mistral-7b_Q5_K_M.gguf (5.13 GB) - large, very low quality loss - recommended
- Chronos-Mistral-7b_Q6_K.gguf (5.94 GB) - very large, extremely low quality loss
- Chronos-Mistral-7b_Q8_0.gguf (7.7 GB) - very large, extremely low quality loss - not recommended
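To fetch and load a single file from the list, one option (not prescribed by this card) is the huggingface_hub client together with llama-cpp-python; the sketch below assumes both are installed (`pip install huggingface_hub llama-cpp-python`) and picks the recommended Q4_K_M file.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download only the recommended Q4_K_M file (about 4.37 GB) from this repo.
model_path = hf_hub_download(
    repo_id="RikudouSage/Chronos-Mistral-7B-GGUF",
    filename="Chronos-Mistral-7b_Q4_K_M.gguf",
)

# Load it at the model's native 4096-token context length.
llm = Llama(model_path=model_path, n_ctx=4096)
```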
Original model description
This model is primarily focused on chat, roleplay, and storywriting, while retaining good reasoning and logic. Chronos can generate very long, coherent outputs, largely thanks to the human-written inputs it was trained on, and it supports a context length of up to 4096 tokens, extendable to 16384 tokens with RoPE scaling while keeping solid coherency.
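A minimal sketch of using the extended context, again assuming llama-cpp-python (any GGUF runtime with RoPE scaling options would also work). The linear frequency scale of 0.25 is inferred from the 4096 → 16384 ratio and is not a setting taken from the original card.

```python
from llama_cpp import Llama

# Stretch the context window from 4096 to 16384 tokens with linear RoPE scaling.
# The 0.25 frequency scale (4096 / 16384) is an assumed value, not from the card.
llm_long = Llama(
    model_path="Chronos-Mistral-7b_Q5_K_M.gguf",  # illustrative local path
    n_ctx=16384,
    rope_freq_scale=0.25,
)
```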
This model uses Alpaca formatting, so for optimal performance, use it to start the dialogue or story; if you use a frontend like SillyTavern, enable Alpaca instruction mode:
### Instruction:
{Your instruction or question here.}
### Response:
Not using this format will make the model perform significantly worse than intended, unless it is merged with another model.
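As a sketch of the template in use, again assuming llama-cpp-python; the helper function and the sample instruction are illustrative, not part of the original card.

```python
from llama_cpp import Llama

def alpaca_prompt(instruction: str) -> str:
    # Wrap user text in the Alpaca template shown above.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

llm = Llama(model_path="Chronos-Mistral-7b_Q4_K_M.gguf", n_ctx=4096)  # illustrative path

result = llm(
    alpaca_prompt("Continue the story: the lighthouse keeper hears a knock at midnight."),
    max_tokens=300,
    stop=["### Instruction:"],  # stop before the model starts a new turn
)
print(result["choices"][0]["text"])
```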
Model tree for RikudouSage/Chronos-Mistral-7B-GGUF
Base model: elinas/chronos-mistral-7b