---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- postbot/multi-emails-hq
metrics:
- accuracy
widget:
- text: 'Good Morning Professor Beans,
    Hope you are doing well. I just wanted to reach out and ask if differential calculus
    will be on the exam'
  example_title: email to prof
- text: 'Hey <NAME>,
    Thank you for signing up for my weekly newsletter. Before we get started, you''ll
    have to confirm your email address.'
  example_title: newsletter
- text: 'Hi <NAME>,
    I hope this email finds you well. I wanted to reach out and ask about office hours'
  example_title: office hours
- text: 'Greetings <NAME>,
    I hope you had a splendid evening at the Company sausage eating festival. I am
    reaching out because'
  example_title: festival
- text: 'Good Morning Harold,
    I was wondering when the next'
  example_title: event
- text: URGENT - I need the TPS reports
  example_title: URGENT
- text: 'Hi Archibald,
    I hope this email finds you extremely well.'
  example_title: emails that find you
- text: 'Hello there.
    I just wanted to reach out and check in to'
  example_title: checking in
- text: 'Hello <NAME>,
    I hope this email finds you well. I wanted to reach out and see if you''ve enjoyed
    your time with us'
  example_title: work well
- text: 'Hi <NAME>,
    I hope this email finds you well. I wanted to reach out and see if we could catch
    up'
  example_title: catch up
- text: I'm <NAME> and I just moved into the area and wanted to reach out and get
    some details on where I could get groceries and
  example_title: grocery
inference:
  parameters:
    min_length: 16
    max_length: 64
    no_repeat_ngram_size: 4
    do_sample: true
    top_k: 40
    top_p: 0.95
    repetition_penalty: 3.5
pipeline_tag: text-generation
base_model: EleutherAI/pythia-160m-deduped
model-index:
- name: pythia-160m-hq-emails-v4
  results:
  - task:
      type: text-generation
      name: Causal Language Modeling
    dataset:
      name: postbot/multi-emails-hq
      type: postbot/multi-emails-hq
    metrics:
    - type: accuracy
      value: 0.611281497151223
      name: Accuracy
---
# pythia-160m-hq-emails-v4
This model is a fine-tuned version of [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped) on the postbot/multi-emails-hq dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2856
- Accuracy: 0.6113
- Perplexity: 9.8313
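
For reference, the reported perplexity is simply the exponential of the evaluation loss: exp(2.2856) ≈ 9.83.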
## Model description
This is version 4 (v4) of the HQ-emails fine-tune of `pythia-160m-deduped`.
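
Below is a minimal usage sketch. The repo id `postbot/pythia-160m-hq-emails-v4` is assumed from the model name (substitute the actual checkpoint path if it differs), and the generation settings mirror the `inference.parameters` block in the card metadata:

```python
from transformers import pipeline

# Assumed repo id -- replace with the actual checkpoint path if it differs.
generator = pipeline("text-generation", model="postbot/pythia-160m-hq-emails-v4")

prompt = (
    "Good Morning Professor Beans,\n"
    "Hope you are doing well. I just wanted to reach out and ask if "
    "differential calculus will be on the exam"
)

# Generation settings taken from the inference parameters listed above.
outputs = generator(
    prompt,
    min_length=16,
    max_length=64,
    no_repeat_ngram_size=4,
    do_sample=True,
    top_k=40,
    top_p=0.95,
    repetition_penalty=3.5,
)
print(outputs[0]["generated_text"])
```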
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 4.0
- mixed_precision_training: Native AMP
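
As a rough illustration only (the exact training script is not part of this card), the settings above map onto Hugging Face `TrainingArguments` approximately as follows; note that 4 per-device samples × 32 accumulation steps gives the reported total train batch size of 128:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above
# (assumes the standard HF Trainer was used; the real script is not shown here).
training_args = TrainingArguments(
    output_dir="pythia-160m-hq-emails-v4",
    learning_rate=6e-4,
    per_device_train_batch_size=4,   # 4 x 32 accumulation steps = 128 total
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=4.0,
    fp16=True,                       # "Native AMP" mixed precision
)
```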
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.412 | 0.99 | 76 | 2.5027 | 0.5458 |
| 1.9702 | 1.99 | 152 | 2.2757 | 0.5850 |
| 1.4628 | 2.99 | 228 | 2.2162 | 0.6082 |
| 1.1662 | 3.99 | 304 | 2.2856 | 0.6113 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__pythia-160m-hq-emails)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.12 |
| ARC (25-shot) | 23.12 |
| HellaSwag (10-shot) | 30.05 |
| MMLU (5-shot) | 26.58 |
| TruthfulQA (0-shot) | 45.51 |
| Winogrande (5-shot) | 50.28 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.31 |