---
language:
- am
- ar
- bn
- zh
- cs
- nl
- en
- fr
- de
- el
- ha
- he
- hi
- id
- it
- ja
- jv
- km
- ko
- lo
- ms
- mr
- fa
- pl
- pt
- ro
- ru
- es
- sw
- sv
- tl
- ta
- te
- th
- tr
- uk
- ur
- vi
license: apache-2.0
datasets:
- lightblue/reasoning-multilingual-R1-Llama-70B-train
tags:
- reasoning
---

# lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual
This is a DeepSeek R1 distill fine-tuned on multilingual Chain-of-Thought (CoT) data. When this model is prompted in a language, it will both think and respond in that language, unlike the original R1, which often thinks in either Chinese or English regardless of the prompt language. This makes the model's outputs more understandable and explainable to a wider audience. Hopefully this will be useful to the AI community, particularly those developing for languages other than English and Chinese.

This model is a multilingual fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B). Other fine-tuned versions of this model can be found in [our collection, here](https://huggingface.co/collections/lightblue/r1-multilingual-679c890166ac0a84e83e38fa).

This model was trained using our [lightblue/reasoning-multilingual-R1-Llama-70B-train](https://huggingface.co/datasets/lightblue/reasoning-multilingual-R1-Llama-70B-train) dataset for ~10 minutes on an 8 x L20 instance ([ecs.gn8is-8x.32xlarge](https://www.alibabacloud.com/help/en/ecs/user-guide/gpu-accelerated-compute-optimized-and-vgpu-accelerated-instance-families-1)) on [Alibaba Cloud](https://www.alibabacloud.com/).

# How to use

When using this model, we recommend a sampling temperature between 0.5 and 0.7, [as per the original distilled R1 models](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B#usage-recommendations).

Additionally, we have observed that the model sometimes tends to repeat itself for more niche languages, so we also recommend setting `repetition_penalty` to 1.1, or higher if the model repeats itself when processing your prompts.

We include a script for using this model in vLLM below.
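The following is a minimal sketch of such a script, assuming vLLM's offline `LLM.chat` API; the example prompt, context length, and token limit are illustrative and should be adjusted to your hardware and use case.

```python
from vllm import LLM, SamplingParams

# Load the model (adjust max_model_len and GPU settings to your hardware).
llm = LLM(model="lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual", max_model_len=8000)

# Recommended sampling settings: temperature 0.5-0.7, repetition_penalty 1.1 or higher.
sampling_params = SamplingParams(temperature=0.5, repetition_penalty=1.1, max_tokens=2048)

# Example prompt in Japanese; the model should both think and answer in Japanese.
conversation = [{"role": "user", "content": "日本で一番高い山は何ですか？簡単に説明してください。"}]

outputs = llm.chat([conversation], sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```

If you serve the model through vLLM's OpenAI-compatible server instead, the same `temperature` and `repetition_penalty` values can be passed as request parameters.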
# Evaluation

Through some quick evaluation of our own, we found that this model produces much more correctly formatted and accurate results for higher-resource languages, such as Japanese, English, and German, than for lower-resource languages, such as Amharic or Lao.

We did a **very** quick evaluation of 5 questions for each language (written by me and translated by GPT-4o Mini) on the [lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual) model, and we find that the model fairly reliably outputs the correct answer in the correct language for a wide variety of languages.

For this evaluation, a score of >=0.8 is good, as one of the questions was very hard. The language detection was done using [pycld2](https://pypi.org/project/pycld2/), so errors may occur where the correct language is mistaken for another one.

| Language | Has a correct think statement | Has the think statement in the correct language | Is the response in the correct language | Is the answer correct |
|:----------------|------------------------------:|--------------------------------------------------:|-----------------------------------------:|----------------------:|
| Amharic | 0.2 | 0 | 0 | 0 |
| Arabic | 1 | 0.8 | 0.8 | 0.6 |
| Bengali | 1 | 1 | 1 | 0.2 |
| Chinese | 1 | 1 | 1 | 0.8 |
| Czech | 1 | 1 | 1 | 0.8 |
| Dutch | 1 | 1 | 1 | 0.8 |
| English | 1 | 1 | 1 | 0.8 |
| French | 1 | 1 | 1 | 0.8 |
| German | 1 | 1 | 1 | 0.8 |
| Greek | 1 | 1 | 1 | 0.6 |
| Hausa | 0.4 | 0 | 0 | 0 |
| Hebrew | 1 | 0.8 | 1 | 0.6 |
| Hindi | 1 | 1 | 1 | 0.8 |
| Indonesian | 1 | 1 | 1 | 0.8 |
| Italian | 1 | 1 | 1 | 0.8 |
| Japanese | 1 | 1 | 0.8 | 0.6 |
| Javanese | 0.8 | 0.2 | 0.2 | 0.6 |
| Khmer | 0.6 | 0.6 | 0.6 | 0 |
| Korean | 1 | 1 | 1 | 1 |
| Lao | 0.4 | 0.4 | 0.4 | 0 |
| Malay | 1 | 0.4 | 0.4 | 0.8 |
| Marathi | 0.6 | 0.4 | 0.6 | 0.2 |
| Persian (Farsi) | 0.6 | None* | None* | 0.2 |
| Polish | 1 | 1 | 1 | 0.6 |
| Portuguese | 1 | 1 | 1 | 0.8 |
| Romanian | 1 | 1 | 1 | 0.8 |
| Russian | 1 | 1 | 1 | 0.8 |
| Spanish | 1 | 1 | 1 | 0.8 |
| Swahili | 0.4 | 0.4 | 0.4 | 0 |
| Swedish | 1 | 1 | 1 | 0.8 |
| Tagalog | 1 | 1 | 1 | 0.8 |
| Tamil | 0.8 | 0.8 | 0.8 | 0.2 |
| Telugu | 0.8 | 0.6 | 0.8 | 0 |
| Thai | 1 | 1 | 1 | 0.8 |
| Turkish | 1 | 1 | 1 | 0.8 |
| Ukrainian | 1 | 1 | 1 | 0.8 |
| Urdu | 1 | 1 | 1 | 0.6 |
| Vietnamese | 1 | 1 | 1 | 1 |

\* There was an error with Farsi detection (my own fault) so we do not report Farsi scores.

The evaluation code for this can be found [here](https://drive.google.com/file/d/1P33GpqvKmHoZUsWqqBPXHTToN2W7MDRG/view?usp=sharing).

# Training code

```yaml
### model
model_name_or_path: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: /root/LLaMA-Factory/examples/deepspeed/ds_z2_config.json

### dataset
dataset: reasoning-multilingual-R1-Llama-70B-train
template: qwen
cutoff_len: 4500
overwrite_cache: true
preprocessing_num_workers: 16
packing: true

### output
output_dir: /root/train_outputs/DeepSeek-R1-Distill-Qwen-7B/reasoning-multilingual-R1-Llama-70B-train
logging_steps: 1
save_steps: 0.99999
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
learning_rate: 1.0e-5
num_train_epochs: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 0.1
```

```bash
# Register the training dataset with LLaMA-Factory.
echo '{
  "reasoning-multilingual-R1-Llama-70B-train": {
    "hf_hub_url": "lightblue/reasoning-multilingual-R1-Llama-70B-train",
    "formatting": "sharegpt"
  }
}' > /root/LLaMA-Factory/data/dataset_info.json

# 7B Qwen: run the SFT job defined in the YAML config above.
cd /root/LLaMA-Factory && llamafactory-cli train /root/reasoning_multilingual_train_7B.yaml

# Remove intermediate checkpoints, then upload the final model to the Hugging Face Hub.
rm -r /root/train_outputs/DeepSeek-R1-Distill-Qwen-7B/reasoning-multilingual-R1-Llama-70B-train/checkpoint*

huggingface-cli upload lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual /root/train_outputs/DeepSeek-R1-Distill-Qwen-7B/reasoning-multilingual-R1-Llama-70B-train
```

# License

We share this model under the Apache 2.0 license.

# Developed by

This model was trained by Peter Devine ([ptrdvn](https://huggingface.co/ptrdvn)) for Lightblue.