---
base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
language:
- en
library_name: transformers
license: other
license_link: https://mistral.ai/licenses/MRL-0.1.md
license_name: mrl
quantized_by: SvdH
base_model_relation: quantized
---

# Mistral-Small-22B-ArliAI-RPMax-v1.1-EXL2-6BPW

6BPW ExLlamaV2 quant of https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1

## RPMax Series Overview

| [2B](https://huggingface.co/ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1) | [3.8B](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) | [8B](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) | [9B](https://huggingface.co/ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1) | [12B](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1) | [20B](https://huggingface.co/ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1) | [22B](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) | [70B](https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1) |

RPMax is a series of models trained on a diverse set of curated creative-writing and RP datasets, with a focus on variety and deduplication. The models are designed to be highly creative and non-repetitive: the dataset is filtered so that no two entries share the same characters or situations, which keeps the model from latching onto a single personality and lets it understand and act appropriately for any character or situation.

Early user feedback is that these models do not feel like other RP models; they have a distinct style and generally do not feel in-bred.

You can access the models at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/

We also have a model ranking page at https://www.arliai.com/models-ranking

Ask questions in our new Discord server: https://discord.com/invite/t75KbPgwhk

## Model Description

Mistral-Small-22B-ArliAI-RPMax-v1.1 is a variant of mistralai/Mistral-Small-Instruct-2409, which carries a restrictive Mistral license, so this model is for personal use only.

Context length: 32768 tokens

### Training Details

* **Sequence Length**: 8192
* **Training Duration**: Approximately 4 days on 2x RTX 3090 Ti
* **Epochs**: 1 epoch, to minimize repetition sickness
* **QLoRA**: rank 64, alpha 128, resulting in ~2% trainable weights
* **Learning Rate**: 0.00001
* **Gradient Accumulation**: A very low 32, for better learning

## Quantization

The model is available in the following formats:

* **FP16**: https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
* **GPTQ_Q4**: https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1-GPTQ_Q4
* **GPTQ_Q8**: https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1-GPTQ_Q8
* **GGUF**: https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1-GGUF

## Suggested Prompt Format

Mistral Instruct format.
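
For reference, the Mistral Instruct template generally looks like the sketch below; the tokenizer's chat template in the base repository is the authoritative version, and exact whitespace handling can vary between tokenizer revisions:

```
<s>[INST] {first user message} [/INST] {assistant response}</s>[INST] {next user message} [/INST]
```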
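
## Loading with ExLlamaV2

A minimal sketch of loading this EXL2 quant with the exllamav2 Python library, following the pattern in the exllamav2 repository's examples. The model path and sampler values are illustrative, and the API may differ slightly between exllamav2 versions:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Local directory containing the downloaded EXL2 quant (illustrative path)
model_dir = "/models/Mistral-Small-22B-ArliAI-RPMax-v1.1-EXL2-6BPW"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative sampler values
settings.top_p = 0.95

# Prompt wrapped in the Mistral Instruct format suggested above
prompt = "[INST] Write a short in-character greeting from a grizzled sea captain. [/INST]"
output = generator.generate_simple(prompt, settings, 256)
print(output)
```

EXL2 quants are also supported out of the box by frontends such as text-generation-webui and TabbyAPI, if you prefer not to script against the library directly.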