---
license: apache-2.0
---

## MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization

Paper: https://arxiv.org/abs/2401.06838

GitHub project: https://github.com/NJUNLP/MAPO

## Benchmarks

Results for 7B-scale models:

| System | [MSVAMP](https://huggingface.co/datasets/Mathoctopus/MSVAMP) | [MGSM](https://huggingface.co/datasets/juletxara/mgsm) | MNumGLUESub |
| ------ | :---: | :---: | :---: |
| GPT-3.5-Turbo | 46.6 | 42.2 | 49.4 |
| [MAmmoTH 7B](https://huggingface.co/TIGER-Lab/MAmmoTH-7B) | 26.3 | 21.3 | 24.2 |
| [WizardMath 7B](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) | 32.5 | 23.0 | 28.7 |
| [MetaMath 7B](https://huggingface.co/meta-math/MetaMath-7B-V1.0) | 46.2 | 37.0 | 43.2 |
| [QAlign 7B](https://huggingface.co/Wenhao97/QAlign-MetaMathQA-7B) | 57.2 | 49.6 | - |
| [MathOctopus 7B](https://huggingface.co/Mathoctopus/Parallel_7B) | 41.2 | 39.5 | 37.1 |
| **[+ MAPO-DPO (ours) 🔥](https://huggingface.co/kevinpro/MathOctopus-MAPO-DPO-7B)** | **57.4** | **41.6** | **50.4** |
| [MetaMathOctopus 7B](https://huggingface.co/kevinpro/MetaMathOctopus-7B) | 53.0 | 45.5 | 39.2 |
| **[+ MAPO-DPO (ours) 👑](https://huggingface.co/kevinpro/MetaMathOctopus-MAPO-DPO-7B)** | **64.7** | **51.6** | **52.9** |
| MistralMathOctopus 7B | 59.0 | 58.0 | 56.8 |
| **+ MAPO-DPO (ours) 👑** | **74.6** | **67.3** | **70.0** |

Results for 13B-scale models:

| System | [MSVAMP](https://huggingface.co/datasets/Mathoctopus/MSVAMP) | [MGSM](https://huggingface.co/datasets/juletxara/mgsm) | MNumGLUESub |
| ------ | :---: | :---: | :---: |
| GPT-3.5-Turbo | 46.6 | 42.2 | 49.4 |
| [MAmmoTH 13B](https://huggingface.co/TIGER-Lab/MAmmoTH-13B) | 38.6 | 28.9 | 29.5 |
| [WizardMath 13B](https://huggingface.co/WizardLM/WizardMath-13B-V1.1) | 35.7 | 28.3 | 29.0 |
| [MetaMath 13B](https://huggingface.co/meta-math/MetaMath-13B-V1.0) | 46.2 | 43.9 | 43.3 |
| [QAlign 13B](https://huggingface.co/Wenhao97/QAlign-MetaMathQA-13B) | 62.6 | 57.1 | - |
| [MathOctopus 13B](https://huggingface.co/Mathoctopus/Parallel_13B) | 51.8 | 46.0 | 40.3 |
| **[+ MAPO-DPO (ours) 🔥](https://huggingface.co/kevinpro/MathOctopus-MAPO-DPO-13B)** | **60.1** | **48.5** | **53.8** |
| [MetaMathOctopus 13B](https://huggingface.co/kevinpro/MetaMathOctopus-13B) | 56.3 | 51.4 | 49.5 |
| **[+ MAPO-DPO (ours) 👑](https://huggingface.co/kevinpro/MetaMathOctopus-MAPO-DPO-13B)** | **67.0** | **58.0** | **59.8** |

## Citation

If you find this model helpful, feel free to cite our paper:

```
@misc{she2024mapo,
      title={MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization},
      author={Shuaijie She and Wei Zou and Shujian Huang and Wenhao Zhu and Xiang Liu and Xiang Geng and Jiajun Chen},
      year={2024},
      eprint={2401.06838},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
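
## Inference Example (sketch)

The released MAPO-DPO checkpoints are standard causal language models hosted on the Hugging Face Hub, so they can be loaded with `transformers`. The snippet below is a minimal sketch, not the official inference script: the Alpaca-style prompt template and the generation settings are assumptions on our part; please check the GitHub project for the exact prompt format used in training.

```python
# Minimal inference sketch (assumed setup, not the official MAPO script).
# Assumption: the checkpoint uses an Alpaca-style instruction prompt;
# see https://github.com/NJUNLP/MAPO for the exact template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kevinpro/MetaMathOctopus-MAPO-DPO-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a single consumer GPU
    device_map="auto",
)

# Any language supported by the model can be used in the question.
question = "Janet has 3 apples and buys 5 more. How many apples does she have now?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{question}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Decode only the newly generated tokens (the reasoning chain and answer).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```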