---
language:
- eng
pretty_name: LMSYS Chatbot Arena ELO Scores
license:
- apache-2.0
tags:
- lmsys
- chatbot
- arena
- elo
---
# LMSYS Chatbot Arena ELO Scores
This dataset is a `datasets`-friendly version of Chatbot Arena ELO scores,
updated daily from the leaderboard API at
https://huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard.
**Updated: 20241028**
## Loading Data
```python
from datasets import load_dataset
dataset = load_dataset("mathewhe/chatbot-arena-elo", split="train")
```
The main branch of this dataset will always be updated to the latest ELO and
leaderboard version. If you need a fixed dataset that does not change, please
specify a date tag when loading the dataset:
```python
from datasets import load_dataset
# Load the leaderboard from October 24, 2024
dataset = load_dataset("mathewhe/chatbot-arena-elo", split="train", revision="20241024")
```
Tags are only created when the leaderboard is updated. See below for a list of
recent tags.
```
20241028
20241024
20241023
20241020
20241014
```
## Dataset Structure
Example instance:
```json
{
"Rank* (UB)": 1,
  "Model Markup": "<a target=\"_blank\" href=\"https://help.openai.com/en/articles/9624314-model-release-notes\" style=\"color: var(--link-text-color); text-decoration: underline;text-decoration-style: dotted;\">ChatGPT-4o-latest (2024-09-03)</a>",
"Model": "ChatGPT-4o-latest (2024-09-03)",
"Arena Score": 1338,
"95% CI": "+3/-5",
"Votes": 24135,
"Organization": "OpenAI",
"License": "Proprietary",
"Knowledge Cutoff": "2023/10"
}
```
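The `"95% CI"` field is stored as a string like `"+3/-5"` rather than as numbers. As a minimal sketch of how you might turn it into usable interval bounds (the `parse_ci` helper below is hypothetical, not part of the dataset):

```python
import re

def parse_ci(ci: str) -> tuple[int, int]:
    """Split a '95% CI' string like '+3/-5' into (upper, lower) offsets."""
    match = re.fullmatch(r"\+(\d+)/-(\d+)", ci)
    if match is None:
        raise ValueError(f"Unexpected CI format: {ci!r}")
    return int(match.group(1)), int(match.group(2))

# Example using the instance shown above
row = {"Model": "ChatGPT-4o-latest (2024-09-03)", "Arena Score": 1338, "95% CI": "+3/-5"}
upper, lower = parse_ci(row["95% CI"])
ci_interval = (row["Arena Score"] - lower, row["Arena Score"] + upper)
print(ci_interval)  # (1333, 1341)
```

You can apply the same helper across the full dataset with `dataset.map` if you need numeric confidence bounds for every model.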
### Citation Information
To cite the ELO leaderboard, please use the original citation:
```bibtex
@misc{chiang2024chatbot,
title={Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference},
author={Wei-Lin Chiang and Lianmin Zheng and Ying Sheng and Anastasios Nikolas Angelopoulos and Tianle Li and Dacheng Li and Hao Zhang and Banghua Zhu and Michael Jordan and Joseph E. Gonzalez and Ion Stoica},
year={2024},
eprint={2403.04132},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
If you want to cite this repo or specific commits for reproducibility, please
include a link to this repo and an exact commit hash or tag.