---
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: category
    dtype: string
  - name: ground_truth
    dtype: string
  - name: turns
    sequence: string
  - name: group
    dtype: string
  - name: movie_name
    dtype: string
  - name: release_date
    dtype: string
  - name: task
    dtype: string
  - name: livebench_release_date
    dtype: timestamp[s]
  - name: livebench_removal_date
    dtype: string
  - name: raw_id
    dtype: int64
  - name: citation
    dtype: string
  splits:
  - name: test
    num_bytes: 469547
    num_examples: 140
  download_size: 278655
  dataset_size: 469547
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
arxiv: 2406.19314
---
# Dataset Card for "livebench/language"
LiveBench is a benchmark for LLMs designed with test set contamination and objective evaluation in mind. It has the following properties:
- LiveBench is designed to limit potential contamination by releasing new questions monthly, as well as having questions based on recently-released datasets, arXiv papers, news articles, and IMDb movie synopses.
- Each question has verifiable, objective ground-truth answers, allowing hard questions to be scored accurately and automatically, without the use of an LLM judge.
- LiveBench currently contains a set of 18 diverse tasks across 6 categories, and we will release new, harder tasks over time.
This is the language category of LiveBench.
See more in our [paper](https://arxiv.org/abs/2406.19314), [leaderboard](https://livebench.ai/), and [datasheet](https://github.com/LiveBench/LiveBench/blob/main/docs/DATASHEET.md).