---
language:
- en
license: apache-2.0
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': question
          '1': request
  splits:
  - name: train
    num_bytes: 9052
    num_examples: 132
  - name: test
    num_bytes: 14391
    num_examples: 182
  download_size: 18297
  dataset_size: 23443
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

This dataset contains manually labeled examples used for training and testing [reddgr/rq-request-question-prompt-classifier](https://huggingface.co/reddgr/rq-request-question-prompt-classifier), a fine-tuned DistilBERT model that classifies chatbot prompts as either 'request' or 'question'.
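
As a quick illustration, the classifier can be loaded with the `transformers` pipeline API. This is a minimal sketch: the example prompts below are made up, and the exact label strings returned depend on the model's configured id-to-label mapping.

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="reddgr/rq-request-question-prompt-classifier",
)

# Hypothetical prompts: one question, one request.
prompts = [
    "What is the capital of France?",
    "Write a short poem about the sea.",
]
for prompt, result in zip(prompts, classifier(prompts)):
    print(f"{prompt!r} -> {result['label']} (score={result['score']:.3f})")
```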

It is part of a project aimed at identifying metrics to quantitatively measure the conversational quality of text generated by large language models (LLMs) and, by extension, any other type of text extracted from a conversational context (customer service chats, social media posts, etc.).

Relevant Jupyter notebooks and Python scripts that use this dataset, along with related datasets and models, can be found in the following GitHub repository:
[reddgr/chatbot-response-scoring-scbn-rqtl](https://github.com/reddgr/chatbot-response-scoring-scbn-rqtl)

## Labels:
- **0**: Question
- **1**: Request
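
The integer labels above can be decoded back to their names with the `datasets` library. A minimal sketch follows; the repository ID is a placeholder (substitute this dataset's actual Hub ID).

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with this dataset's actual Hub ID.
ds = load_dataset("reddgr/rq-request-question-prompts")

train = ds["train"]  # 132 examples; the test split holds 182
label_feature = train.features["label"]  # ClassLabel(names=['question', 'request'])

example = train[0]
print(example["text"], "->", label_feature.int2str(example["label"]))
```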