---
license: apache-2.0
task_categories:
  - text-generation
language:
  - th
size_categories:
  - n<1K
---

MT-Bench Thai

MT-Bench Thai is a Thai-language dataset for multi-turn benchmarking that covers 9 categories:

  1. Writing
  2. Roleplay
  3. Extraction
  4. Reasoning
  5. Math
  6. Coding
  7. STEM
  8. Social Science
  9. Knowledge III

We introduce the final category, Knowledge III, which evaluates understanding of Thai cultural context.

Dataset Loading

from datasets import load_dataset
ds = load_dataset("ThaiLLM-Leaderboard/mt-bench-thai")
print(ds)

output

DatasetDict({
    train: Dataset({
        features: ['question_id', 'turns', 'reference', 'generation_kwargs', 'category'],
        num_rows: 91
    })
})
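To see how the 91 questions are distributed across the 9 categories, the category column can be summarized directly. A minimal sketch building on the loading snippet above (the per-category counts are not reproduced here):

from collections import Counter

# Count the number of questions per category in the train split.
category_counts = Counter(ds["train"]["category"])
for category, count in category_counts.most_common():
    print(category, count)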

A sample

ds["train"][0]

output

{
    "question_id": 0,
    "turns": [
        "จงเติมบทประพันธ์จากสุทรภู่นี้ให้ครบ “แล้วสอนว่าอย่าไว้ใจมนุษย์”",
        "บทประพันธ์นี้มาจากเรื่องใด"
    ],
    "reference": [
        "“แล้วสอนว่าอย่าไว้ใจมนุษย์ \nมันแสนสุดลึกล้ำเหลือกำหนด\nถึงเถาวัลย์พันเกี่ยวที่เลี้ยวลด \nก็ไม่คดเหมือนหนึ่งในน้ำใจคน”",
        "พระอภัยมณี"
    ],
    "generation_kwargs": {
        "temperature": 0.1
    },
    "category": "Knowledge III"
}
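In this sample, the first turn asks the model to complete a verse by the poet Sunthorn Phu, and the second turn asks which work the verse comes from (the reference answer is Phra Aphai Mani). To run a model on the benchmark, each record is played as a two-turn conversation using its generation_kwargs. A minimal sketch, assuming a hypothetical chat(messages, **kwargs) callable that wraps whichever model you are evaluating:

def run_sample(sample, chat):
    """Play one benchmark record as a multi-turn conversation.

    `chat` is a placeholder: any function that takes a list of
    role/content messages plus generation kwargs and returns a string.
    """
    messages, answers = [], []
    for turn in sample["turns"]:
        messages.append({"role": "user", "content": turn})
        answer = chat(messages, **sample["generation_kwargs"])
        messages.append({"role": "assistant", "content": answer})
        answers.append(answer)
    return answers

# Example usage (my_model_chat is your own model wrapper):
# answers = run_sample(ds["train"][0], chat=my_model_chat)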

Dataset Construction

Guideline

  1. One annotator is assigned to one category at a time.
  2. Based on [1], a sample can be translated from English to Thai unless it meets one of the following conditions: (1) the sample contains localization-specific content or entities well understood by native speakers, e.g., a roleplay involving Elon Musk; or (2) the question is too complex, such as one involving advanced quantum mechanics.
  3. If a sample meets either condition, we rewrite the question on a similar topic to localize or simplify it.

Constraints

  1. Each question must be understandable to a native Thai high-school student in STEM.
  2. The length of each question should follow the number of sentences in the MT-Bench source and should not differ from it significantly.
  3. Annotators should also provide a reference response for both turns when the question is closed-ended (see the sketch after this list).
  4. Questions that introduce demographic biases, subjective beliefs, or ethical concerns are unacceptable.
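
Not every question is closed-ended, so not every record necessarily carries reference answers. A quick check of how many records include references; a sketch, assuming open-ended items store an empty or missing reference list (this is an assumption about the schema, not stated above):

# Keep only records that ship with reference answers (closed-ended questions).
closed_ended = ds["train"].filter(
    lambda x: x["reference"] is not None and len(x["reference"]) > 0
)
print(len(closed_ended), "of", len(ds["train"]), "questions include reference answers")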

Annotators: Patomporn Payoungkhamdee, Peerat Limkonchotiwat, Wannaphong Phatthiyaphaibun, Surapon Nonesung, Chalermpun Mai-On, Lalita Lowphansirikul, and Parinthapat Pengpun.

Acknowledgement

We would like to thank the WangchanX project for providing resource support. We greatly appreciate Wei Qi Leong from AI Singapore for his valuable advice and review. Finally, we would like to thank SCB10X for hosting the leaderboard.

Citation

[1] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. Retrieved from https://arxiv.org/abs/2306.05685