Quantization made by Richard Erkhov.
dadjokes-tuned-opt - AWQ
- Model creator: https://huggingface.co/gnumanth/
- Original model: https://huggingface.co/gnumanth/dadjokes-tuned-opt/
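A minimal sketch of loading the AWQ checkpoint through `transformers` (which dispatches AWQ weights to the `autoawq` package). The repo ID below is a placeholder, not a confirmed path for this upload; substitute the actual name of this quantized repository:

```python
# Minimal sketch: load the AWQ-quantized model via transformers.
# Requires: pip install autoawq
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/dadjokes-tuned-opt-awq"  # placeholder repo ID (assumption)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("joke", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```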
Original model description:
```yaml
license: mit
base_model: facebook/opt-350m
tags:
  - trl
  - sft
  - gnumanth/dadjokes-trained-opt
model-index:
  - name: tmp_trainer
    results: []
datasets:
  - gnumanth/dad-jokes
language:
  - en
pipeline_tag: text-generation
widget:
  - text: "joke"
```
This model is a fine-tuned version of facebook/opt-350m on the gnumanth/dad-jokes dataset.
Model description
A simple model trained with SFT (supervised fine-tuning), just for fun!
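A quick way to try the original (unquantized) fine-tune; the `"joke"` prompt mirrors the widget entry in the metadata above:

```python
from transformers import pipeline

# Sample a dad joke from the original fine-tune.
generator = pipeline("text-generation", model="gnumanth/dadjokes-tuned-opt")
print(generator("joke", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```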
Training hyperparameters
The following hyperparameters were used during training; a sketch of the implied trl setup follows the list:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
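The training script itself was not published, so the following is only a hedged reconstruction from the hyperparameters above and the `trl`/`sft` tags in the metadata; the dataset column name and sequence length are assumptions:

```python
# Hedged reconstruction of the training setup (not the author's script).
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("gnumanth/dad-jokes", split="train")

args = TrainingArguments(
    output_dir="tmp_trainer",          # matches the model-index name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # Default AdamW betas (0.9, 0.999) and epsilon 1e-8 match the
    # optimizer reported above.
)

trainer = SFTTrainer(
    model="facebook/opt-350m",
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name in the dataset
    max_seq_length=512,         # assumed; not stated in the card
)
trainer.train()
```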
Training results
- global_step: 18
- training_loss: 2.2378
- train_runtime: 149.75 s
- train_samples_per_second: 0.881
- train_steps_per_second: 0.12
- total_flos: 9,828,797,644,800
- epoch: 3.0
Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.1
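To reproduce this environment, pin the matching releases, e.g. `pip install transformers==4.38.1 datasets==2.17.1 tokenizers==0.15.1` together with a PyTorch 2.1.0 build for CUDA 12.1 (the `+cu121` suffix above).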