---
license: apache-2.0
task_categories:
  - question-answering
  - text-generation
annotations_creators:
  - crowdsourced
  - expert-generated
language:
  - amh
  - arb
  - ary
  - ars
  - acq
  - arz
  - apc
  - ben
  - ceb
  - dan
  - deu
  - ell
  - eng
  - eus
  - fil
  - fin
  - fra
  - gle
  - guj
  - hat
  - hau
  - hin
  - hun
  - ibo
  - ind
  - ita
  - jav
  - jpn
  - kan
  - kir
  - kor
  - kur
  - lit
  - mal
  - mar
  - mlg
  - msa
  - mya
  - nep
  - nld
  - nso
  - nya
  - pan
  - pes
  - pol
  - por
  - pus
  - rus
  - sin
  - sna
  - snd
  - som
  - spa
  - sqi
  - srp
  - sun
  - swa
  - swe
  - tam
  - tel
  - tha
  - tur
  - ukr
  - urd
  - vie
  - wol
  - xho
  - yor
  - zho
  - zul
language_creators:
  - crowdsourced
  - expert-generated
multilinguality:
  - multilingual
size_categories:
  - 100K<n<1M
---

This is the [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) converted to ChatML format, ready to use with Hugging Face TRL's `SFTTrainer`.
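
For reference, here is a minimal sketch of fine-tuning on this dataset with TRL's `SFTTrainer`. It assumes the dataset is published as `Felladrin/ChatML-aya_dataset` (as this card suggests) and a TRL version, such as 0.7.x, in which `SFTTrainer` still accepts `dataset_text_field` and `max_seq_length` directly; the model name is only a placeholder:

```python
from datasets import load_dataset
from trl import SFTTrainer

# Load the pre-formatted ChatML dataset (repo id assumed from this card).
dataset = load_dataset("Felladrin/ChatML-aya_dataset", split="train")

trainer = SFTTrainer(
    "Felladrin/Llama-160M-Chat-v1",  # placeholder: any causal LM can be used
    train_dataset=dataset,
    dataset_text_field="text",       # the pre-rendered ChatML text column
    max_seq_length=2048,
)
trainer.train()
```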

Python code used for conversion:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# The tokenizer's chat template renders each example as ChatML text.
tokenizer = AutoTokenizer.from_pretrained("Felladrin/Llama-160M-Chat-v1")

dataset = load_dataset("CohereForAI/aya_dataset", split="train")


def format(columns):
    messages = [
        {
            "role": "user",
            "content": columns["inputs"].strip(),
        },
        {
            "role": "assistant",
            "content": columns["targets"].strip(),
        },
    ]

    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}


# Keep only the columns needed downstream and write the result to Parquet.
dataset.map(format).select_columns(
    ["text", "language", "language_code", "annotation_type", "user_id"]
).to_parquet("train.parquet")
```
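
Assuming `Felladrin/Llama-160M-Chat-v1` uses a standard ChatML chat template (as the dataset name suggests), each entry in the `text` column should look roughly like the example below; the exact output (e.g. whether a default system message is prepended) depends on the tokenizer's chat template:

```
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
{response}<|im_end|>
```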