Model,Accuracy
Qwen2-7B-Instruct,0.672506256703611
Meta-Llama-3.1-8B-Instruct,0.6037182695745441
Gemma-2-9b-it-sg-ultrachat-sft,0.7028244547729711
Qwen2_5_32B_Instruct,0.7996424740793707
Qwen2_5_7B_Instruct,0.6935287808366106
Qwen2_5_1_5B_Instruct,0.5646764390418305
Qwen2-72B-Instruct,0.7922774401144083
MERALiON-LLaMA-3-8B-Chat,0.5870575616732213
Meta-Llama-3-8B-Instruct,0.6005720414730068
Meta-Llama-3.1-70B-Instruct,0.8058634250983197
Qwen2_5_3B_Instruct,0.6118698605648909
SeaLLMs-v3-7B-Chat,0.6670003575259207
Qwen2_5_72B_Instruct,0.8129424383267787
gemma-2-9b-it,0.7100464783696818
Meta-Llama-3-70B-Instruct,0.7649624597783339
Qwen2_5_14B_Instruct,0.7542366821594566
gemma-2-2b-it,0.5706828745084018
llama3-8b-cpt-sea-lionv2-instruct,0.6130854486950303
Qwen2_5_0_5B_Instruct,0.461136932427601
GPT4o_0513,0.8308187343582409