---
dataset_info:
  features:
    - name: answer
      dtype: string
    - name: question
      dtype: string
    - name: context
      dtype: string
    - name: input_ids
      sequence: int32
    - name: labels
      sequence: int64
  splits:
    - name: train
      num_bytes: 788165403
      num_examples: 118695
    - name: test
      num_bytes: 98388509
      num_examples: 14835
    - name: validation
      num_bytes: 98339161
      num_examples: 14838
  download_size: 45704542
  dataset_size: 984893073
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
---

Dataset used for training text-to-SQL models. I've pre-tokenized it for faster loading.
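
Since the splits already contain `input_ids` and `labels`, they can be fed to a model directly. A minimal loading sketch (the repo id `howkewlisthat/rts_t5-small_sql` is inferred from this page and may need adjusting):

```python
from datasets import load_dataset

# Load the pre-tokenized splits; repo id inferred from this page
dataset = load_dataset("howkewlisthat/rts_t5-small_sql")

# Each example carries the raw text plus the pre-tokenized tensors
print(dataset["train"][0].keys())  # answer, question, context, input_ids, labels
```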

Here is the prompt format used by the tokenizer code:

```python
from transformers import AutoTokenizer

# Tokenizer for the target model (t5-small, inferred from the dataset name)
tokenizer = AutoTokenizer.from_pretrained("t5-small")

def tokenize_function(example):
    start_prompt = "Tables:\n"
    middle_prompt = "\n\nQuestion:\n"
    end_prompt = "\n\nAnswer:\n"

    # Build one prompt per example from its table schema (context) and question
    prompt = [
        start_prompt + context + middle_prompt + question + end_prompt
        for context, question in zip(example["context"], example["question"])
    ]
    example["input_ids"] = tokenizer(
        prompt, padding="max_length", truncation=True, return_tensors="pt"
    ).input_ids
    # The target SQL query is tokenized as the labels
    example["labels"] = tokenizer(
        example["answer"], padding="max_length", truncation=True, return_tensors="pt"
    ).input_ids
    return example
```
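
As a sketch of how the function is applied: it indexes each column as a list, so it must be mapped with `batched=True`. The source dataset shown here (`b-mc2/sql-create-context`, which has matching `context`/`question`/`answer` columns) is an assumption; the actual source isn't stated on this page.

```python
from datasets import load_dataset

# Assumed raw source dataset with context/question/answer columns
raw_dataset = load_dataset("b-mc2/sql-create-context")

# batched=True because tokenize_function iterates over columns as lists
tokenized = raw_dataset.map(tokenize_function, batched=True)
```

Note that with `padding="max_length"` the label pad tokens are kept as-is; for seq2seq training they are typically replaced with -100 so the loss ignores them.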