---
license: apache-2.0
tags:
- generated_from_trainer
- chatgpt
metrics:
- accuracy
model-index:
- name: distilgpt2-HC3
  results: []
widget:
- text: >-
    Review: Best cast iron skillet you will ever buy. Is this review positive
    or negative? <answer>
  example_title: Sentiment analysis
- text: >-
    Barack Obama nominated Hillary Clinton as his secretary of state on Monday.
    He chose her because <answer>
  example_title: Coreference resolution
- text: >-
    On a shelf, there are five books: a gray book, a red book, a purple book,
    a blue book, and a black book. Here's the puzzle, <answer>
  example_title: Logic puzzles
- text: >-
    The two men running to become New York City's next mayor will face off in
    their first debate Wednesday night <answer>
  example_title: Reading comprehension
- text: >-
    Is it true that if I have five 5-hour energy drinks in a single 24-hour
    period, I get 25 hours of energy and spontaneously explode? <answer>
  example_title: 5 hour energy
- text: >-
    what happens if you train a smaller model on a dataset of
    reinforcement-learning optimized model responses? <answer>
  example_title: deep learning advice
inference:
  parameters:
    temperature: 0.6
    max_length: 96
    no_repeat_ngram_size: 3
    repetition_penalty: 1.5
datasets:
- pszemraj/HC3-textgen-qa
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# distilgpt2-HC3

> what happens if you train a smaller model on a dataset of chatGPT responses?

This happens.

![example](https://i.imgur.com/i5snxQJ.png)
## Model description
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the "chatgpt answers" column of the [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9983
- Accuracy: 0.5441
## Intended uses & limitations
Despite how its responses may sound, this model has only ~80M parameters and will likely not be factually accurate most of the time.
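To query the model, end the prompt with the `<answer>` token, as in the widget examples above. Below is a minimal inference sketch using the generation parameters from the card header; the repo id `pszemraj/distilgpt2-HC3` is an assumption based on this card's name.

```python
from transformers import pipeline

# load the fine-tuned checkpoint (repo id assumed: pszemraj/distilgpt2-HC3)
generator = pipeline("text-generation", model="pszemraj/distilgpt2-HC3")

# prompts should end with the <answer> token so the model knows to respond
prompt = "Is a cast iron skillet worth buying? <answer>"

result = generator(
    prompt,
    do_sample=True,
    temperature=0.6,          # matches the widget inference parameters above
    max_length=96,
    no_repeat_ngram_size=3,
    repetition_penalty=1.5,
)
print(result[0]["generated_text"])
```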
## Training and evaluation data
Modifications made with respect to the original dataset:
- drop all rows that did not have a chatGPT answer
- if a row (_e.g., an ELI5 question_) had more than one chatGPT response, randomly choose one of the responses as the answer to the question
- combine the question and the chosen chatGPT answer into a single string per row, formatted as `QUESTION_TEXT <answer> CHATGPT_ANSWER_TEXT <end_answer>`
- `<answer>` and `<end_answer>` are added tokens that help the model learn "turns" in the conversation (see the preprocessing sketch below)
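The sketch below illustrates this preprocessing. It assumes the HC3 column names `question` and `chatgpt_answers` and the config name `all`; it is illustrative rather than the exact script used.

```python
import random
from datasets import load_dataset
from transformers import AutoTokenizer

# load HC3 (config name "all" assumed) and drop rows without a chatGPT answer
ds = load_dataset("Hello-SimpleAI/HC3", "all", split="train")
ds = ds.filter(lambda row: len(row["chatgpt_answers"]) > 0)

def to_training_text(row):
    # pick one chatGPT response at random and join it to the question
    answer = random.choice(row["chatgpt_answers"])
    return {"text": f"{row['question']} <answer> {answer} <end_answer>"}

ds = ds.map(to_training_text)

# register the turn markers as new tokens; the embedding matrix must then be
# resized with model.resize_token_embeddings(len(tokenizer)) before training
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.add_tokens(["<answer>", "<end_answer>"])
```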
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 3208
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6.0
- mixed_precision_training: Native AMP
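For reference, a roughly equivalent `transformers.TrainingArguments` configuration is sketched below; the Adam betas and epsilon listed above are the library defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilgpt2-HC3",
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=3208,
    gradient_accumulation_steps=16,  # 8 x 16 = 128 effective train batch size
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=6.0,
    fp16=True,                       # native AMP mixed precision
)
```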
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2485 | 0.98 | 41 | 2.1457 | 0.5158 |
| 2.0757 | 1.98 | 82 | 2.0584 | 0.5304 |
| 1.966 | 2.98 | 123 | 2.0210 | 0.5376 |
| 1.8602 | 3.98 | 164 | 2.0012 | 0.5422 |
| 1.8089 | 4.98 | 205 | 1.9977 | 0.5436 |
| 1.7698 | 5.98 | 246 | 1.9983 | 0.5441 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1 |