Mistral-24B-Reasoning-zhTW
This model is a fine-tuned version of mistralai/Mistral-Small-24B-Instruct-2501, specifically optimized for mathematical reasoning tasks and enhanced for Traditional Chinese (zh-Hant-TW) language support.
Model Details
Model Description
- Developed by: Yenting Lin
- Funded by: Ubitus
- Model type: Instruction-tuned language model for reasoning
- Language(s) (NLP): English (en), Traditional Chinese (zh-Hant-TW)
- Finetuned from model: mistralai/Mistral-Small-24B-Instruct-2501
Training Details
The model was trained using 4×8 H100 GPUs, provided by Ubitus.
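Usage
Below is a minimal inference sketch, assuming the model can be loaded with the standard Hugging Face transformers chat-template API; the generation settings and the example prompt are illustrative, not recommended values from the authors.
```python
# Minimal inference sketch (assumes the standard transformers chat-template API;
# generation parameters are illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ubitus/Mistral-24B-Reasoning-zhTW"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit on suitable hardware
    device_map="auto",
)

# Example Traditional Chinese reasoning prompt (illustrative):
# "Reason step by step: what is the least common multiple of 12 and 18?"
messages = [
    {"role": "user", "content": "請一步一步推理：12 與 18 的最小公倍數是多少？"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids, max_new_tokens=512, do_sample=True, temperature=0.7
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```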