---
language:
- en
license: apache-2.0
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': question
          '1': request
  splits:
  - name: train
    num_bytes: 9052
    num_examples: 132
  - name: test
    num_bytes: 14391
    num_examples: 182
  download_size: 18297
  dataset_size: 23443
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

This dataset contains manually labeled examples used for training and testing [reddgr/rq-request-question-prompt-classifier](https://huggingface.co/reddgr/rq-request-question-prompt-classifier), a fine-tuned DistilBERT model that classifies chatbot prompts as either 'request' or 'question'.
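
For quick inspection, the splits can be loaded with the `datasets` library. This is a minimal sketch; the Hub ID below is a placeholder to be replaced with this dataset's actual repository name:

```python
from datasets import load_dataset

# Placeholder Hub ID: substitute this dataset's actual repository name
ds = load_dataset("reddgr/rq-request-question-prompts")

print(ds)              # DatasetDict with 'train' (132 examples) and 'test' (182 examples)
print(ds["train"][0])  # e.g. {'text': '...', 'label': 0}
```
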
It is part of a project aimed at identifying metrics to quantitatively measure the conversational quality of text generated by large language models (LLMs) and, by extension, of any other type of text extracted from a conversational context (customer service chats, social media posts, etc.).

Relevant Jupyter notebooks and Python scripts that use this dataset, together with related datasets and models, can be found in the following GitHub repository:
[reddgr/chatbot-response-scoring-scbn-rqtl](https://github.com/reddgr/chatbot-response-scoring-scbn-rqtl)

## Labels

- **0**: Question
- **1**: Request
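
Because `label` is declared as a `class_label` feature, the mapping between integers and label names ships with the dataset itself. A small sketch, reusing the placeholder Hub ID from above:

```python
from datasets import load_dataset

# Placeholder Hub ID, as above
train = load_dataset("reddgr/rq-request-question-prompts", split="train")

label = train.features["label"]   # a datasets.ClassLabel
print(label.names)                # ['question', 'request']
print(label.int2str(1))           # 'request'
print(label.str2int("question"))  # 0
```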