---
tags:
- generated_from_trainer
metrics:
- accuracy
inference:
  parameters:
    max_new_tokens: 64
    do_sample: true
    repetition_penalty: 1.1
    no_repeat_ngram_size: 5
    guidance_scale: 1.01
    eta_cutoff: 0.001
widget:
- text: My name is El Microondas the Wise and
  example_title: El Microondas
- text: A meme is
  example_title: meme
- text: >-
    Barack Obama nominated Hillary Clinton as his secretary of state on Monday.
    He chose her because she had
  example_title: Coreference resolution
- text: >-
    On a shelf, there are five books: a gray book, a red book, a purple book, a
    blue book, and a black book
  example_title: Logic puzzles
- text: >-
    The two men running to become New York City's next mayor will face off in
    their first debate Wednesday night
  example_title: Reading comprehension
datasets:
- pszemraj/simple_wikipedia_LM
pipeline_tag: text-generation
license: apache-2.0
---

# pythia-31m-simplewiki-2048

This model was initialized with random weights using the configuration of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) and trained on `pszemraj/simple_wikipedia_LM` for 3 epochs.
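
As a minimal sketch, this kind of from-scratch initialization can be reproduced with `transformers`, reusing only the published config (not the pretrained weights):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Load only the architecture definition of pythia-31m (no weights).
config = AutoConfig.from_pretrained("EleutherAI/pythia-31m")

# Instantiate a model of that shape with randomly initialized weights.
model = AutoModelForCausalLM.from_config(config)
```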

It achieves the following results on the evaluation set:
- Loss: 3.6874
- Accuracy: 0.4105
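
For intuition, the reported loss corresponds to a perplexity of roughly exp(3.6874) ≈ 39.9, since perplexity is the exponential of the per-token cross-entropy loss:

```python
import math

# Perplexity is the exponential of the cross-entropy validation loss.
print(math.exp(3.6874))  # ~39.94
```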

## Model description

A ~31M-parameter causal language model with the [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) configuration (GPT-NeoX architecture) and a context length of 2048 tokens, trained from scratch rather than fine-tuned.

## Intended uses & limitations

This model serves as a baseline for comparison against other models trained on the same data.
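
A minimal inference sketch, reusing the generation parameters from this card's `inference` block and assuming the repo id `pszemraj/pythia-31m-simplewiki-2048` (inferred from the leaderboard link below):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="pszemraj/pythia-31m-simplewiki-2048")

result = pipe(
    "A meme is",            # one of the widget prompts above
    max_new_tokens=64,
    do_sample=True,
    repetition_penalty=1.1,
    no_repeat_ngram_size=5,
    eta_cutoff=0.001,
    guidance_scale=1.01,    # classifier-free guidance, as set in the card
)
print(result[0]["generated_text"])
```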

## Training and evaluation data

The model was trained and evaluated on [pszemraj/simple_wikipedia_LM](https://huggingface.co/datasets/pszemraj/simple_wikipedia_LM), a language-modeling dataset derived from Simple English Wikipedia.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1
- eval_batch_size: 1
- seed: 80085
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
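
As a rough sketch, these settings map onto `transformers.TrainingArguments` as follows (the actual training script is not included in this card; `output_dir` and `optim` below are assumptions):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pythia-31m-simplewiki-2048",  # assumed output directory
    learning_rate=5e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=80085,
    gradient_accumulation_steps=64,  # effective train batch size: 1 x 64 = 64
    optim="adamw_torch",             # the card lists "Adam"; Trainer's default is AdamW
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-7,
    lr_scheduler_type="inverse_sqrt",
    warmup_ratio=0.05,
    num_train_epochs=3.0,
)
```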

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 6.0657        | 0.22  | 100  | 5.6210          | 0.2414   |
| 5.2447        | 0.45  | 200  | 4.9316          | 0.3054   |
| 4.8397        | 0.67  | 300  | 4.6011          | 0.3343   |
| 4.7933        | 0.9   | 400  | 4.3878          | 0.3530   |
| 4.274         | 1.12  | 500  | 4.2352          | 0.3646   |
| 4.4867        | 1.35  | 600  | 4.1224          | 0.3723   |
| 4.3434        | 1.57  | 700  | 4.0282          | 0.3791   |
| 4.1857        | 1.8   | 800  | 3.9552          | 0.3841   |
| 4.229         | 2.02  | 900  | 3.8890          | 0.3909   |
| 3.9189        | 2.25  | 1000 | 3.8301          | 0.3967   |
| 4.084         | 2.47  | 1100 | 3.7782          | 0.4023   |
| 3.8965        | 2.7   | 1200 | 3.7302          | 0.4069   |
| 3.915         | 2.92  | 1300 | 3.6874          | 0.4105   |


### Framework versions

- Transformers 4.33.1
- Pytorch 2.2.0.dev20230907+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__pythia-31m-simplewiki-2048).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 24.35 |
| ARC (25-shot)       | 22.18 |
| HellaSwag (10-shot) | 25.55 |
| MMLU (5-shot)       | 23.12 |
| TruthfulQA (0-shot) | 49.37 |
| Winogrande (5-shot) | 49.41 |
| GSM8K (5-shot)      | 0.0   |
| DROP (3-shot)       | 0.81  |