Model,Accuracy
Qwen2-7B-Instruct,0.5385539755657921
Meta-Llama-3.1-8B-Instruct,0.5252687095266707
llama3-8b-cpt-sea-lionv2.1-instruct,0.5269377127979171
Gemma-2-9b-it-sg-ultrachat-sft,0.6051805861539489
Qwen2_5_32B_Instruct,0.6314840777087923
Qwen2_5_7B_Instruct,0.5600507376994459
Qwen2_5_1_5B_Instruct,0.4295346818879765
Qwen2-72B-Instruct,0.6385606515788771
MERALiON-LLaMA-3-8B-Chat,0.5533747246144602
Meta-Llama-3-8B-Instruct,0.5264703918819681
Meta-Llama-3.1-70B-Instruct,0.6740770411910008
Qwen2_5_3B_Instruct,0.49656185326123237
SeaLLMs-v3-7B-Chat,0.5267374324053675
Qwen2_5_72B_Instruct,0.6380933306629281
gemma-2-9b-it,0.606983109686895
Meta-Llama-3-70B-Instruct,0.6323519594098405
Qwen2_5_14B_Instruct,0.6009746979104079
gemma2-9b-cpt-sea-lionv3-instruct,0.6196007744175178
gemma-2-2b-it,0.48220842512851325
llama3-8b-cpt-sea-lionv2-instruct,0.5252687095266707
Qwen2_5_0_5B_Instruct,0.3279925228653448
GPT4o_0513,0.7584618465852193