---
license: apache-2.0
tags:
  - generated_from_trainer
  - chatgpt
metrics:
  - accuracy
model-index:
  - name: distilgpt2-HC3
    results: []
widget:
  - text: >-
      Is this review positive or negative? Review: Best cast iron skillet you
      will ever buy. <answer>
    example_title: Sentiment analysis
  - text: >-
      Barack Obama nominated Hillary Clinton as his secretary of state on Monday.
      He chose her because she had <answer>
    example_title: Coreference resolution
  - text: >-
      On a shelf, there are five books: a gray book, a red book, a purple book,
      a blue book, and a black book <answer>
    example_title: Logic puzzles
  - text: >-
      The two men running to become New York City's next mayor will face off in
      their first debate Wednesday night <answer>
    example_title: Reading comprehension
  - text: >-
      Is it true that if I have five 5-hour energy drinks in a single 24-hour
      period, I get 25 hours of energy and spontaneously explode? <answer>
    example_title: 5 hour energy
inference:
  parameters:
    temperature: 0.6
    max_length: 96
    no_repeat_ngram_size: 2
    repetition_penalty: 2.5
datasets:
  - Hello-SimpleAI/HC3
language:
  - en
library_name: transformers
---

# distilgpt2-HC3

What happens if you train a smaller model on a dataset of ChatGPT responses?

This happens.

## Model description

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the ChatGPT-answers column of the [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) dataset.

It achieves the following results on the evaluation set:

- Loss: 1.9983
- Accuracy: 0.5441
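
For intuition: assuming the reported loss is mean token-level cross-entropy in nats (as reported by the `transformers` Trainer), it corresponds to a perplexity of exp(1.9983) ≈ 7.4.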

## Intended uses & limitations

Despite how ChatGPT-like its outputs may sound, this model has only ~80M parameters and will likely not be factually accurate most of the time.
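
As a minimal usage sketch (the hub id `pszemraj/distilgpt2-HC3` is inferred from this repo, the prompt is only an example, and the generation parameters mirror the widget settings in the metadata above):

```python
from transformers import pipeline

# hub id assumed from this repo; adjust if the model lives elsewhere
generator = pipeline("text-generation", model="pszemraj/distilgpt2-HC3")

# prompts should end with the <answer> token (see the data section below)
prompt = "Why is the sky blue? <answer>"
result = generator(
    prompt,
    max_length=96,
    do_sample=True,
    temperature=0.6,
    no_repeat_ngram_size=2,
    repetition_penalty=2.5,
)
print(result[0]["generated_text"])
```

The trailing `<answer>` matters: the model was trained on `QUESTION <answer> ANSWER <end_answer>` strings, so it treats `<answer>` as the start of its "turn" in the conversation.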

## Training and evaluation data

Modifications made w.r.t. the original dataset (sketched in code below):

- drop all rows that did not have a ChatGPT answer
- if a row (e.g., an ELI5 question) had more than one ChatGPT response, randomly choose one of the responses as the answer to the question
- combine the question and the chosen ChatGPT answer into a single string per row, as follows: `QUESTION_TEXT <answer> CHATGPT_ANSWER_TEXT <end_answer>`
  - `<answer>` and `<end_answer>` serve as added tokens to help the model learn "turns" in the conversation
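
A minimal sketch of that preprocessing, assuming the HC3 column names `question` and `chatgpt_answers` and the `all` config (check the dataset card for the exact schema):

```python
import random
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("Hello-SimpleAI/HC3", "all")["train"]

def build_example(row):
    answers = row["chatgpt_answers"]
    if not answers:  # drop rows with no ChatGPT answer
        return None
    answer = random.choice(answers)  # pick one response when there are several
    return f"{row['question']} <answer> {answer} <end_answer>"

texts = [t for t in map(build_example, ds) if t is not None]

# register the turn markers as added tokens so BPE never splits them
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.add_tokens(["<answer>", "<end_answer>"])
# remember to call model.resize_token_embeddings(len(tokenizer)) before training
```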

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 3208
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6.0
- mixed_precision_training: Native AMP
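
The same settings, written out as a hedged `transformers.TrainingArguments` sketch (the `output_dir` is a placeholder; the actual training script is not part of this card):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilgpt2-HC3",     # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=3208,
    gradient_accumulation_steps=16,  # 8 * 16 = total train batch size of 128
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=6.0,
    fp16=True,                       # "Native AMP" mixed precision
)
```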

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2485        | 0.98  | 41   | 2.1457          | 0.5158   |
| 2.0757        | 1.98  | 82   | 2.0584          | 0.5304   |
| 1.966         | 2.98  | 123  | 2.0210          | 0.5376   |
| 1.8602        | 3.98  | 164  | 2.0012          | 0.5422   |
| 1.8089        | 4.98  | 205  | 1.9977          | 0.5436   |
| 1.7698        | 5.98  | 246  | 1.9983          | 0.5441   |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1