---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- generated_from_trainer
- chatgpt
- HC3
datasets:
- pszemraj/HC3-textgen-qa
metrics:
- accuracy
widget:
- text: 'Review: Best cast iron skillet you will ever buy. Is this review positive
    or negative? <answer>'
  example_title: Sentiment analysis
- text: Barack Obama nominated Hilary Clinton as his secretary of state on Monday.
    He chose her because <answer>
  example_title: Coreference resolution
- text: 'On a shelf, there are five books: a gray book, a red book, a purple book,
    a blue book, and a black book. Here''s the puzzle, <answer>'
  example_title: Logic puzzles
- text: The two men running to become New York City's next mayor will face off in
    their first debate Wednesday night <answer>
  example_title: Reading comprehension
- text: Is it true that if I have five 5-hour energy drinks in a single 24-hour period,
    I get 25 hours of energy and spontaneously explode? <answer>
  example_title: 5 hour energy
- text: what happens if you train a smaller model on a dataset of reinforcement-learning
    optimized model responses? <answer>
  example_title: deep learning advice
inference:
  parameters:
    temperature: 0.6
    max_length: 96
    no_repeat_ngram_size: 4
    repetition_penalty: 1.5
    eta_cutoff: 0.0008
    renormalize_logits: true
pipeline_tag: text-generation
model-index:
- name: distilgpt2-HC3
  results: []
---


# distilgpt2-HC3


> what happens if you train a smaller model on a dataset of chatGPT responses?

This happens.

![example](https://i.imgur.com/i5snxQJ.png)

## Model description

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the "chatgpt answers" column of the `Hello-SimpleAI/HC3` dataset.

It achieves the following results on the evaluation set:
- Loss: 1.9983
- Accuracy: 0.5441


## Intended uses & limitations

Despite how it may sound, this model has only ~80M parameters and will likely not be factually accurate most of the time.
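
A minimal usage sketch, assuming the `transformers` library is installed: the prompt format (ending in `<answer>`) and the generation settings below mirror the inference parameters in this card's metadata; `build_prompt` and `GEN_KWARGS` are illustrative names, not part of the model.

```python
def build_prompt(question: str) -> str:
    """Format a question in the turn format the model was trained on:
    the <answer> token cues the model to respond."""
    return f"{question.strip()} <answer>"


# Generation parameters taken from this card's inference settings.
GEN_KWARGS = {
    "do_sample": True,
    "temperature": 0.6,
    "max_length": 96,
    "no_repeat_ngram_size": 4,
    "repetition_penalty": 1.5,
    "eta_cutoff": 0.0008,
    "renormalize_logits": True,
}

# With `transformers` installed (requires downloading the model):
#   from transformers import pipeline
#   generator = pipeline("text-generation", model="pszemraj/distilgpt2-HC3")
#   print(generator(build_prompt("what is the capital of France?"), **GEN_KWARGS))
```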

## Training and evaluation data

Modifications made to the original dataset:

- dropped all rows that did not have a chatGPT answer
- if a row (_e.g., an ELI5 question_) had more than one chatGPT response, one of the responses was chosen at random as the answer to the question
- the question and chatGPT answer were combined into a single string per row as follows: `QUESTION_TEXT <answer> CHATGPT_ANSWER_TEXT <end_answer>`
  - `<answer>` and `<end_answer>` were added as special tokens to help the model learn "turns" in the conversation
 
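The preprocessing steps above can be sketched as a single row-level function (a hypothetical helper, not the actual training script):

```python
import random


def format_row(question, chatgpt_answers, seed=None):
    """Apply the card's preprocessing to one dataset row:
    drop the row if there is no chatGPT answer, pick one answer at
    random when there are several, and join question and answer with
    the <answer> / <end_answer> turn tokens."""
    answers = [a for a in chatgpt_answers if a and a.strip()]
    if not answers:
        return None  # row is dropped
    answer = random.Random(seed).choice(answers)
    return f"{question} <answer> {answer} <end_answer>"
```

For example, `format_row("why is the sky blue?", ["Rayleigh scattering."])` yields `"why is the sky blue? <answer> Rayleigh scattering. <end_answer>"`, while a row with no chatGPT answers returns `None` and is discarded.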
## Training procedure


### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 3208
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6.0
- mixed_precision_training: Native AMP
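
The effective batch size follows from the per-device batch size and the gradient accumulation steps; a quick arithmetic check (not the training script itself):

```python
train_batch_size = 8
gradient_accumulation_steps = 16

# total_train_batch_size = per-device batch size x accumulation steps
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128

# Training ran 246 optimizer steps over 6 epochs (see the results table),
# so a warmup ratio of 0.05 puts LR warmup at roughly the first 12 steps
# of the cosine schedule.
total_steps = 246
warmup_steps = int(total_steps * 0.05)
print(warmup_steps)  # 12
```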

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2485        | 0.98  | 41   | 2.1457          | 0.5158   |
| 2.0757        | 1.98  | 82   | 2.0584          | 0.5304   |
| 1.966         | 2.98  | 123  | 2.0210          | 0.5376   |
| 1.8602        | 3.98  | 164  | 2.0012          | 0.5422   |
| 1.8089        | 4.98  | 205  | 1.9977          | 0.5436   |
| 1.7698        | 5.98  | 246  | 1.9983          | 0.5441   |


### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__distilgpt2-HC3).

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |28.18|
|AI2 Reasoning Challenge (25-Shot)|24.66|
|HellaSwag (10-Shot)              |27.99|
|MMLU (5-Shot)                    |23.95|
|TruthfulQA (0-shot)              |42.10|
|Winogrande (5-shot)              |50.36|
|GSM8k (5-shot)                   | 0.00|