---
language:
- en
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: context
    dtype: string
  - name: response
    dtype: string
  - name: category
    dtype: string
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 34692013
    num_examples: 15011
  download_size: 15166632
  dataset_size: 34692013
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- question-answering
- text2text-generation
---

# Dataset Card for "databricks-dolly-15k-chatml"

## Dataset Summary

This dataset was created by **Re:cast AI** to transform the existing [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset into a [ChatML](https://huggingface.co/docs/transformers/main/en/chat_templating)-friendly format for use in SFT tasks with pretrained models.

## Dataset Structure

```python
messages = [
    {"content": "You are an expert Q&A system that is trusted around the world. You always... etc.", "role": "system"},
    {"content": "(Optional) Context information is below.\n----------------\nVirgin Australia, the... etc.", "role": "user"},
    {"content": "Virgin Australia commenced services on 31 August 2000... etc.", "role": "assistant"}
]
```
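
At training time, a chat template serializes each `messages` list into a single string. As a rough sketch of what the standard ChatML delimiters (`<|im_start|>` / `<|im_end|>`) produce — a hand-rolled renderer for illustration only; in practice `transformers`' `tokenizer.apply_chat_template` handles this — consider:

```python
def render_chatml(messages):
    # Wrap each turn in the standard ChatML role delimiters.
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )

# Shortened example turns (contents abbreviated for illustration).
example_messages = [
    {"role": "system", "content": "You are an expert Q&A system."},
    {"role": "user", "content": "When did Virgin Australia start operating?"},
]
print(render_chatml(example_messages))
```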

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("recastai/databricks-dolly-15k-chatml", split="train")
```

## Processing applied to original dataset

```python
INSTRUCTIONS = """You are an expert Q&A system that is trusted around the world. You always answer the user's query in a helpful and friendly way.
Some rules you always follow:
1. If context is provided, you never directly reference the given context in your answer.
2. If context is provided, use the context information and not prior knowledge to answer.
3. Avoid statements like 'Based on the context, ...' or 'The context information ...' or 'The answer to the user's query...' or anything along those lines.
4. If no context is provided use your internal knowledge to answer."""

# databricks-dolly-15k features:
# - instruction: The user query/question
# - context: (optional) context to use to help the assistant
# - response: The assistant's response to the query/question
key_mapping = dict(
    query="instruction",
    context="context",
    response="response",
)

def process_chatml_fn(example, validation=False):
    """
    Processing specific to databricks-dolly-15k into a chat format.
    """
    user_content = (
        "(Optional) Context information is below.\n"
        "----------------\n"
        "{context}\n"
        "----------------\n"
        "Answer the following query.\n"
        "{query}\n"
    )
    assistant_content = "{response}"

    message = [
        {"role": "system", "content": INSTRUCTIONS},
        {"role": "user", "content": user_content.format(
            context=example[key_mapping["context"]],
            query=example[key_mapping["query"]],
        )},
        {"role": "assistant", "content": assistant_content.format(
            response=example[key_mapping["response"]],
        )},
    ]

    return message
```
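
As a sanity check, the conversion can be exercised on a single hand-written record before mapping it over the full dataset. The record and the shortened system prompt below are invented for illustration, and the function body is repeated in condensed form so the snippet runs on its own:

```python
INSTRUCTIONS = "You are an expert Q&A system."  # shortened stand-in for the full prompt

def process_chatml_fn(example):
    # Condensed version of the processing function above.
    user_content = (
        "(Optional) Context information is below.\n"
        "----------------\n"
        f"{example['context']}\n"
        "----------------\n"
        "Answer the following query.\n"
        f"{example['instruction']}\n"
    )
    return [
        {"role": "system", "content": INSTRUCTIONS},
        {"role": "user", "content": user_content},
        {"role": "assistant", "content": example["response"]},
    ]

# Invented record mirroring the dolly-15k schema.
record = {
    "instruction": "When did Virgin Australia start operating?",
    "context": "Virgin Australia commenced services on 31 August 2000.",
    "response": "Virgin Australia commenced services on 31 August 2000.",
}
msgs = process_chatml_fn(record)
print([m["role"] for m in msgs])  # → ['system', 'user', 'assistant']
```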